00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 118
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3619
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.031 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.034 The recommended git tool is: git
00:00:00.034 using credential 00000000-0000-0000-0000-000000000002
00:00:00.037 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.055 Fetching changes from the remote Git repository
00:00:00.056 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.080 Using shallow fetch with depth 1
00:00:00.080 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.080 > git --version # timeout=10
00:00:00.115 > git --version # 'git version 2.39.2'
00:00:00.115 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.167 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.167 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.822 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.834 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.845 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:03.845 > git config core.sparsecheckout # timeout=10
00:00:03.856 > git read-tree -mu HEAD # timeout=10
00:00:03.871 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:03.890 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:03.891 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:03.982 [Pipeline] Start of Pipeline
00:00:03.995 [Pipeline] library
00:00:03.997 Loading library shm_lib@master
00:00:03.997 Library shm_lib@master is cached. Copying from home.
00:00:04.012 [Pipeline] node
00:00:04.024 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:04.026 [Pipeline] {
00:00:04.036 [Pipeline] catchError
00:00:04.037 [Pipeline] {
00:00:04.051 [Pipeline] wrap
00:00:04.062 [Pipeline] {
00:00:04.070 [Pipeline] stage
00:00:04.072 [Pipeline] { (Prologue)
00:00:04.085 [Pipeline] echo
00:00:04.086 Node: VM-host-WFP7
00:00:04.091 [Pipeline] cleanWs
00:00:04.103 [WS-CLEANUP] Deleting project workspace...
00:00:04.103 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.109 [WS-CLEANUP] done
00:00:04.317 [Pipeline] setCustomBuildProperty
00:00:04.415 [Pipeline] httpRequest
00:00:04.795 [Pipeline] echo
00:00:04.796 Sorcerer 10.211.164.101 is alive
00:00:04.803 [Pipeline] retry
00:00:04.804 [Pipeline] {
00:00:04.814 [Pipeline] httpRequest
00:00:04.818 HttpMethod: GET
00:00:04.819 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.819 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.820 Response Code: HTTP/1.1 200 OK
00:00:04.821 Success: Status code 200 is in the accepted range: 200,404
00:00:04.821 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.965 [Pipeline] }
00:00:04.984 [Pipeline] // retry
00:00:04.991 [Pipeline] sh
00:00:05.275 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:05.290 [Pipeline] httpRequest
00:00:05.675 [Pipeline] echo
00:00:05.676 Sorcerer 10.211.164.101 is alive
00:00:05.686 [Pipeline] retry
00:00:05.687 [Pipeline] {
00:00:05.699 [Pipeline] httpRequest
00:00:05.703 HttpMethod: GET
00:00:05.704 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:05.704 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:05.710 Response Code: HTTP/1.1 200 OK
00:00:05.711 Success: Status code 200 is in the accepted range: 200,404
00:00:05.711 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:01:22.912 [Pipeline] }
00:01:22.929 [Pipeline] // retry
00:01:22.937 [Pipeline] sh
00:01:23.223 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:01:25.812 [Pipeline] sh
00:01:26.099 + git -C spdk log --oneline -n5
00:01:26.099 b18e1bd62 version: v24.09.1-pre
00:01:26.099 19524ad45 version: v24.09
00:01:26.099 9756b40a3 dpdk: update submodule to include alarm_cancel fix
00:01:26.099 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810
00:01:26.099 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys
00:01:26.119 [Pipeline] withCredentials
00:01:26.131 > git --version # timeout=10
00:01:26.145 > git --version # 'git version 2.39.2'
00:01:26.163 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:26.165 [Pipeline] {
00:01:26.175 [Pipeline] retry
00:01:26.177 [Pipeline] {
00:01:26.192 [Pipeline] sh
00:01:26.477 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:26.750 [Pipeline] }
00:01:26.768 [Pipeline] // retry
00:01:26.772 [Pipeline] }
00:01:26.788 [Pipeline] // withCredentials
00:01:26.797 [Pipeline] httpRequest
00:01:27.180 [Pipeline] echo
00:01:27.182 Sorcerer 10.211.164.101 is alive
00:01:27.191 [Pipeline] retry
00:01:27.193 [Pipeline] {
00:01:27.207 [Pipeline] httpRequest
00:01:27.212 HttpMethod: GET
00:01:27.212 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:27.213 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:27.216 Response Code: HTTP/1.1 200 OK
00:01:27.217 Success: Status code 200 is in the accepted range: 200,404
00:01:27.218 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:03:03.671 [Pipeline] }
00:03:03.689 [Pipeline] // retry
00:03:03.697 [Pipeline] sh
00:03:03.982 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:03:05.374 [Pipeline] sh
00:03:05.659 + git -C dpdk log --oneline -n5
00:03:05.659 eeb0605f11 version: 23.11.0
00:03:05.659 238778122a doc: update release notes for 23.11
00:03:05.659 46aa6b3cfc doc: fix description of RSS features
00:03:05.659 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:03:05.659 7e421ae345 devtools: support skipping forbid rule check
00:03:05.677 [Pipeline] writeFile
00:03:05.692 [Pipeline] sh
00:03:05.980 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:05.993 [Pipeline] sh
00:03:06.277 + cat autorun-spdk.conf
00:03:06.277 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:06.277 SPDK_RUN_ASAN=1
00:03:06.277 SPDK_RUN_UBSAN=1
00:03:06.277 SPDK_TEST_RAID=1
00:03:06.277 SPDK_TEST_NATIVE_DPDK=v23.11
00:03:06.277 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:06.277 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:06.285 RUN_NIGHTLY=1
00:03:06.287 [Pipeline] }
00:03:06.300 [Pipeline] // stage
00:03:06.315 [Pipeline] stage
00:03:06.318 [Pipeline] { (Run VM)
00:03:06.331 [Pipeline] sh
00:03:06.616 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:06.616 + echo 'Start stage prepare_nvme.sh'
00:03:06.616 Start stage prepare_nvme.sh
00:03:06.616 + [[ -n 1 ]]
00:03:06.616 + disk_prefix=ex1
00:03:06.616 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:03:06.616 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:03:06.616 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:03:06.616 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:06.616 ++ SPDK_RUN_ASAN=1
00:03:06.616 ++ SPDK_RUN_UBSAN=1
00:03:06.616 ++ SPDK_TEST_RAID=1
00:03:06.616 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:03:06.616 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:06.616 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:06.616 ++ RUN_NIGHTLY=1
00:03:06.616 + cd /var/jenkins/workspace/raid-vg-autotest
00:03:06.616 + nvme_files=()
00:03:06.616 + declare -A nvme_files
00:03:06.616 + backend_dir=/var/lib/libvirt/images/backends
00:03:06.616 + nvme_files['nvme.img']=5G
00:03:06.616 + nvme_files['nvme-cmb.img']=5G
00:03:06.616 + nvme_files['nvme-multi0.img']=4G
00:03:06.616 + nvme_files['nvme-multi1.img']=4G
00:03:06.616 + nvme_files['nvme-multi2.img']=4G
00:03:06.616 + nvme_files['nvme-openstack.img']=8G
00:03:06.616 + nvme_files['nvme-zns.img']=5G
00:03:06.616 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:06.616 + (( SPDK_TEST_FTL == 1 ))
00:03:06.616 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:06.616 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:06.616 + for nvme in "${!nvme_files[@]}"
00:03:06.616 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:03:06.616 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:06.616 + for nvme in "${!nvme_files[@]}"
00:03:06.616 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:03:06.616 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:06.616 + for nvme in "${!nvme_files[@]}"
00:03:06.616 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:03:06.616 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:06.616 + for nvme in "${!nvme_files[@]}"
00:03:06.616 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:03:06.616 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:06.616 + for nvme in "${!nvme_files[@]}"
00:03:06.616 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:03:06.616 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:06.616 + for nvme in "${!nvme_files[@]}"
00:03:06.616 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:03:06.616 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:06.616 + for nvme in "${!nvme_files[@]}"
00:03:06.616 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:03:06.616 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:06.928 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:03:06.928 + echo 'End stage prepare_nvme.sh'
00:03:06.928 End stage prepare_nvme.sh
00:03:06.941 [Pipeline] sh
00:03:07.225 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:07.225 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:03:07.225
00:03:07.225 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:03:07.225 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:03:07.225 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:03:07.225 HELP=0
00:03:07.225 DRY_RUN=0
00:03:07.225 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:03:07.225 NVME_DISKS_TYPE=nvme,nvme,
00:03:07.225 NVME_AUTO_CREATE=0
00:03:07.225 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:03:07.225 NVME_CMB=,,
00:03:07.225 NVME_PMR=,,
00:03:07.225 NVME_ZNS=,,
00:03:07.225 NVME_MS=,,
00:03:07.225 NVME_FDP=,,
00:03:07.225 SPDK_VAGRANT_DISTRO=fedora39
00:03:07.225 SPDK_VAGRANT_VMCPU=10
00:03:07.225 SPDK_VAGRANT_VMRAM=12288
00:03:07.225 SPDK_VAGRANT_PROVIDER=libvirt
00:03:07.225 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:07.225 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:07.225 SPDK_OPENSTACK_NETWORK=0
00:03:07.225 VAGRANT_PACKAGE_BOX=0
00:03:07.225 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:03:07.225 FORCE_DISTRO=true
00:03:07.225 VAGRANT_BOX_VERSION=
00:03:07.225 EXTRA_VAGRANTFILES=
00:03:07.225 NIC_MODEL=virtio
00:03:07.225
00:03:07.225 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:03:07.225 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:03:09.772 Bringing machine 'default' up with 'libvirt' provider...
00:03:09.772 ==> default: Creating image (snapshot of base box volume).
00:03:10.031 ==> default: Creating domain with the following settings...
00:03:10.031 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731084279_b6bd484e6a0ed882bb5e
00:03:10.031 ==> default: -- Domain type: kvm
00:03:10.031 ==> default: -- Cpus: 10
00:03:10.031 ==> default: -- Feature: acpi
00:03:10.031 ==> default: -- Feature: apic
00:03:10.031 ==> default: -- Feature: pae
00:03:10.031 ==> default: -- Memory: 12288M
00:03:10.031 ==> default: -- Memory Backing: hugepages:
00:03:10.031 ==> default: -- Management MAC:
00:03:10.031 ==> default: -- Loader:
00:03:10.031 ==> default: -- Nvram:
00:03:10.031 ==> default: -- Base box: spdk/fedora39
00:03:10.031 ==> default: -- Storage pool: default
00:03:10.031 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731084279_b6bd484e6a0ed882bb5e.img (20G)
00:03:10.031 ==> default: -- Volume Cache: default
00:03:10.031 ==> default: -- Kernel:
00:03:10.031 ==> default: -- Initrd:
00:03:10.031 ==> default: -- Graphics Type: vnc
00:03:10.031 ==> default: -- Graphics Port: -1
00:03:10.031 ==> default: -- Graphics IP: 127.0.0.1
00:03:10.031 ==> default: -- Graphics Password: Not defined
00:03:10.031 ==> default: -- Video Type: cirrus
00:03:10.031 ==> default: -- Video VRAM: 9216
00:03:10.031 ==> default: -- Sound Type:
00:03:10.031 ==> default: -- Keymap: en-us
00:03:10.031 ==> default: -- TPM Path:
00:03:10.031 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:10.031 ==> default: -- Command line args:
00:03:10.031 ==> default: -> value=-device,
00:03:10.031 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:10.031 ==> default: -> value=-drive,
00:03:10.031 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:03:10.031 ==> default: -> value=-device,
00:03:10.031 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:10.031 ==> default: -> value=-device,
00:03:10.031 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:10.031 ==> default: -> value=-drive,
00:03:10.031 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:03:10.031 ==> default: -> value=-device,
00:03:10.031 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:10.031 ==> default: -> value=-drive,
00:03:10.031 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:03:10.031 ==> default: -> value=-device,
00:03:10.031 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:10.031 ==> default: -> value=-drive,
00:03:10.031 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:03:10.031 ==> default: -> value=-device,
00:03:10.031 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:10.031 ==> default: Creating shared folders metadata...
00:03:10.031 ==> default: Starting domain.
00:03:11.418 ==> default: Waiting for domain to get an IP address...
00:03:29.524 ==> default: Waiting for SSH to become available...
00:03:29.524 ==> default: Configuring and enabling network interfaces...
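The `-device`/`-drive` pairs echoed above follow QEMU's emulated-NVMe pattern: each `nvme` controller is declared once, then each backing image gets a `-drive ...,if=none` plus an `nvme-ns` namespace bound to that controller with an incrementing `nsid`. A minimal sketch of how such an argument list can be assembled; the helper function, its name, and the short image names are illustrative assumptions, not code from the SPDK vagrant scripts:

```shell
#!/usr/bin/env bash
# Hypothetical helper: emit QEMU args for one NVMe controller with N
# namespace images, mirroring the nvme / nvme-ns pairs in the log above.
nvme_ctrl_args() {
  local id=$1 serial=$2 addr=$3
  shift 3
  # One controller declaration...
  local args="-device nvme,id=${id},serial=${serial},addr=${addr}"
  local nsid=1 img
  # ...then one backing drive + namespace per image, nsid counting up.
  for img in "$@"; do
    args+=" -drive format=raw,file=${img},if=none,id=${id}-drive$((nsid - 1))"
    args+=" -device nvme-ns,drive=${id}-drive$((nsid - 1)),bus=${id},nsid=${nsid}"
    nsid=$((nsid + 1))
  done
  printf '%s\n' "$args"
}

# Controller nvme-1 with three namespaces, as in the log:
nvme_ctrl_args nvme-1 12341 0x11 multi0.img multi1.img multi2.img
```

Run against the three `multi*.img` files, this yields the same drive ids (`nvme-1-drive0` through `nvme-1-drive2`) and namespace ids (`nsid=1` through `nsid=3`) that appear in the `Command line args` listing above.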
00:03:34.804 default: SSH address: 192.168.121.93:22
00:03:34.804 default: SSH username: vagrant
00:03:34.804 default: SSH auth method: private key
00:03:36.712 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:44.845 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:03:51.471 ==> default: Mounting SSHFS shared folder...
00:03:53.381 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:53.381 ==> default: Checking Mount..
00:03:54.762 ==> default: Folder Successfully Mounted!
00:03:54.762 ==> default: Running provisioner: file...
00:03:56.142 default: ~/.gitconfig => .gitconfig
00:03:56.402
00:03:56.402 SUCCESS!
00:03:56.402
00:03:56.402 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:03:56.402 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:56.402 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
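Earlier in this log, each `-s 5G`/`-s 4G` request to `create_nvme_img.sh` was echoed back as an exact byte count (`size=5368709120`, `size=4294967296`), i.e. the size suffixes are binary, 1024-based multiples. A small converter showing that mapping; the function name is a hypothetical illustration, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Illustrative converter from binary size suffixes (K/M/G/T, 1024-based,
# as qemu-img interprets them) to bytes. Hypothetical helper name.
size_to_bytes() {
  local n=${1%[KMGT]}          # numeric part, e.g. "5" from "5G"
  local suffix=${1##*[0-9]}    # trailing suffix, e.g. "G" (empty if none)
  case $suffix in
    K) echo $((n * 1024)) ;;
    M) echo $((n * 1024 * 1024)) ;;
    G) echo $((n * 1024 * 1024 * 1024)) ;;
    T) echo $((n * 1024 * 1024 * 1024 * 1024)) ;;
    *) echo "$n" ;;            # no suffix: already bytes
  esac
}

size_to_bytes 5G   # 5368709120, as in the Formatting lines above
size_to_bytes 4G   # 4294967296
```

The same convention explains the VM's `RAM=12288` / `-- Memory: 12288M` pairing: 12288 MiB is 12 GiB.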
00:03:56.402 00:03:56.412 [Pipeline] } 00:03:56.426 [Pipeline] // stage 00:03:56.434 [Pipeline] dir 00:03:56.434 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:03:56.436 [Pipeline] { 00:03:56.447 [Pipeline] catchError 00:03:56.449 [Pipeline] { 00:03:56.461 [Pipeline] sh 00:03:56.743 + vagrant ssh-config --host vagrant 00:03:56.743 + sed -ne /^Host/,$p 00:03:56.743 + tee ssh_conf 00:03:59.281 Host vagrant 00:03:59.281 HostName 192.168.121.93 00:03:59.281 User vagrant 00:03:59.281 Port 22 00:03:59.281 UserKnownHostsFile /dev/null 00:03:59.281 StrictHostKeyChecking no 00:03:59.281 PasswordAuthentication no 00:03:59.281 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:59.281 IdentitiesOnly yes 00:03:59.281 LogLevel FATAL 00:03:59.281 ForwardAgent yes 00:03:59.281 ForwardX11 yes 00:03:59.281 00:03:59.295 [Pipeline] withEnv 00:03:59.297 [Pipeline] { 00:03:59.310 [Pipeline] sh 00:03:59.593 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:59.593 source /etc/os-release 00:03:59.593 [[ -e /image.version ]] && img=$(< /image.version) 00:03:59.593 # Minimal, systemd-like check. 00:03:59.593 if [[ -e /.dockerenv ]]; then 00:03:59.593 # Clear garbage from the node's name: 00:03:59.593 # agt-er_autotest_547-896 -> autotest_547-896 00:03:59.593 # $HOSTNAME is the actual container id 00:03:59.593 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:59.593 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:59.593 # We can assume this is a mount from a host where container is running, 00:03:59.593 # so fetch its hostname to easily identify the target swarm worker. 
00:03:59.593 container="$(< /etc/hostname) ($agent)" 00:03:59.593 else 00:03:59.593 # Fallback 00:03:59.593 container=$agent 00:03:59.593 fi 00:03:59.593 fi 00:03:59.593 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:59.593 00:03:59.865 [Pipeline] } 00:03:59.880 [Pipeline] // withEnv 00:03:59.890 [Pipeline] setCustomBuildProperty 00:03:59.907 [Pipeline] stage 00:03:59.909 [Pipeline] { (Tests) 00:03:59.926 [Pipeline] sh 00:04:00.208 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:00.480 [Pipeline] sh 00:04:00.761 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:01.037 [Pipeline] timeout 00:04:01.037 Timeout set to expire in 1 hr 30 min 00:04:01.039 [Pipeline] { 00:04:01.053 [Pipeline] sh 00:04:01.336 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:01.904 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:04:01.917 [Pipeline] sh 00:04:02.198 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:02.472 [Pipeline] sh 00:04:02.755 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:03.032 [Pipeline] sh 00:04:03.354 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:04:03.630 ++ readlink -f spdk_repo 00:04:03.630 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:03.630 + [[ -n /home/vagrant/spdk_repo ]] 00:04:03.630 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:03.630 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:03.630 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:03.630 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:03.630 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:03.630 + [[ raid-vg-autotest == pkgdep-* ]] 00:04:03.630 + cd /home/vagrant/spdk_repo 00:04:03.630 + source /etc/os-release 00:04:03.630 ++ NAME='Fedora Linux' 00:04:03.630 ++ VERSION='39 (Cloud Edition)' 00:04:03.630 ++ ID=fedora 00:04:03.630 ++ VERSION_ID=39 00:04:03.630 ++ VERSION_CODENAME= 00:04:03.630 ++ PLATFORM_ID=platform:f39 00:04:03.630 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:03.630 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:03.630 ++ LOGO=fedora-logo-icon 00:04:03.630 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:03.630 ++ HOME_URL=https://fedoraproject.org/ 00:04:03.630 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:03.630 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:03.630 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:03.630 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:03.630 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:03.630 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:03.630 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:03.630 ++ SUPPORT_END=2024-11-12 00:04:03.630 ++ VARIANT='Cloud Edition' 00:04:03.630 ++ VARIANT_ID=cloud 00:04:03.630 + uname -a 00:04:03.630 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:03.630 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:04.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.200 Hugepages 00:04:04.200 node hugesize free / total 00:04:04.200 node0 1048576kB 0 / 0 00:04:04.200 node0 2048kB 0 / 0 00:04:04.200 00:04:04.200 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:04.200 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:04.200 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:04.200 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:04:04.200 + rm -f /tmp/spdk-ld-path 00:04:04.200 + source autorun-spdk.conf 00:04:04.200 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:04.200 ++ SPDK_RUN_ASAN=1 00:04:04.200 ++ SPDK_RUN_UBSAN=1 00:04:04.200 ++ SPDK_TEST_RAID=1 00:04:04.200 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:04:04.200 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:04.200 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:04.200 ++ RUN_NIGHTLY=1 00:04:04.200 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:04.200 + [[ -n '' ]] 00:04:04.200 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:04.200 + for M in /var/spdk/build-*-manifest.txt 00:04:04.200 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:04.200 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:04.200 + for M in /var/spdk/build-*-manifest.txt 00:04:04.200 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:04.200 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:04.200 + for M in /var/spdk/build-*-manifest.txt 00:04:04.200 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:04.200 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:04.460 ++ uname 00:04:04.460 + [[ Linux == \L\i\n\u\x ]] 00:04:04.460 + sudo dmesg -T 00:04:04.460 + sudo dmesg --clear 00:04:04.460 + dmesg_pid=6166 00:04:04.460 + sudo dmesg -Tw 00:04:04.460 + [[ Fedora Linux == FreeBSD ]] 00:04:04.460 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:04.460 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:04.460 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:04.460 + [[ -x /usr/src/fio-static/fio ]] 00:04:04.460 + export FIO_BIN=/usr/src/fio-static/fio 00:04:04.460 + FIO_BIN=/usr/src/fio-static/fio 00:04:04.460 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:04.460 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:04:04.460 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:04.460 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:04.460 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:04.460 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:04.460 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:04.460 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:04.460 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:04.460 Test configuration: 00:04:04.460 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:04.460 SPDK_RUN_ASAN=1 00:04:04.460 SPDK_RUN_UBSAN=1 00:04:04.460 SPDK_TEST_RAID=1 00:04:04.460 SPDK_TEST_NATIVE_DPDK=v23.11 00:04:04.460 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:04.460 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:04.460 RUN_NIGHTLY=1 16:45:33 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:04:04.460 16:45:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:04.460 16:45:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:04.460 16:45:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:04.460 16:45:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.460 16:45:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.460 16:45:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.460 16:45:33 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.460 16:45:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.460 16:45:33 -- paths/export.sh@5 -- $ export PATH 00:04:04.460 16:45:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.460 16:45:33 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:04.460 16:45:33 -- common/autobuild_common.sh@479 -- $ date +%s 00:04:04.460 16:45:33 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731084333.XXXXXX 00:04:04.460 16:45:33 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731084333.WJ354D 00:04:04.460 16:45:33 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:04:04.460 16:45:33 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:04:04.460 16:45:33 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:04:04.720 16:45:33 -- common/autobuild_common.sh@486 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:04:04.720 16:45:33 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:04.720 16:45:33 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:04.720 16:45:33 -- common/autobuild_common.sh@495 -- $ get_config_params 00:04:04.720 16:45:33 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:04:04.720 16:45:33 -- common/autotest_common.sh@10 -- $ set +x 00:04:04.720 16:45:34 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:04:04.720 16:45:34 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:04:04.720 16:45:34 -- pm/common@17 -- $ local monitor 00:04:04.720 16:45:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.720 16:45:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.720 16:45:34 -- pm/common@25 -- $ sleep 1 00:04:04.720 16:45:34 -- pm/common@21 -- $ date +%s 00:04:04.720 16:45:34 -- pm/common@21 -- $ date +%s 00:04:04.720 16:45:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731084334 00:04:04.720 16:45:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731084334 00:04:04.720 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731084334_collect-vmstat.pm.log 00:04:04.720 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731084334_collect-cpu-load.pm.log 00:04:05.659 16:45:35 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:04:05.659 16:45:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:05.659 16:45:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:05.659 16:45:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:05.659 16:45:35 -- spdk/autobuild.sh@16 -- $ date -u 00:04:05.659 Fri Nov 8 04:45:35 PM UTC 2024 00:04:05.659 16:45:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:05.659 v24.09-rc1-9-gb18e1bd62 00:04:05.659 16:45:35 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:04:05.659 16:45:35 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:04:05.660 16:45:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:05.660 16:45:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:05.660 16:45:35 -- common/autotest_common.sh@10 -- $ set +x 00:04:05.660 ************************************ 00:04:05.660 START TEST asan 00:04:05.660 ************************************ 00:04:05.660 using asan 00:04:05.660 16:45:35 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:04:05.660 00:04:05.660 real 0m0.001s 00:04:05.660 user 0m0.000s 00:04:05.660 sys 0m0.000s 00:04:05.660 16:45:35 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:05.660 16:45:35 asan -- common/autotest_common.sh@10 -- $ set +x 00:04:05.660 ************************************ 00:04:05.660 END TEST asan 00:04:05.660 ************************************ 00:04:05.660 16:45:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:05.660 16:45:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:05.660 16:45:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:05.660 16:45:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:05.660 16:45:35 -- common/autotest_common.sh@10 -- $ set +x 00:04:05.660 
************************************ 00:04:05.660 START TEST ubsan 00:04:05.660 ************************************ 00:04:05.660 using ubsan 00:04:05.660 16:45:35 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:04:05.660 00:04:05.660 real 0m0.000s 00:04:05.660 user 0m0.000s 00:04:05.660 sys 0m0.000s 00:04:05.660 16:45:35 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:05.660 16:45:35 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:05.660 ************************************ 00:04:05.660 END TEST ubsan 00:04:05.660 ************************************ 00:04:05.920 16:45:35 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:04:05.920 16:45:35 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:04:05.920 16:45:35 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:04:05.920 16:45:35 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:04:05.920 16:45:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:05.920 16:45:35 -- common/autotest_common.sh@10 -- $ set +x 00:04:05.920 ************************************ 00:04:05.920 START TEST build_native_dpdk 00:04:05.920 ************************************ 00:04:05.920 16:45:35 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
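The trace above selects `compiler=gcc`, and the lines that follow read `gcc -dumpversion` and gate extra warning flags on the major version (`-Werror` for gcc >= 5, `-Wno-stringop-overflow` for gcc >= 10, per the `@89`/`@93` checks). A minimal sketch of that pattern — the function name `gate_cflags` and its shape are assumptions, not the actual `autobuild_common.sh` code:

```shell
# Sketch of the compiler-version flag gating visible in the trace.
# Assumption: gate_cflags is a hypothetical helper, not from autobuild_common.sh.
gate_cflags() {  # $1 = compiler name, $2 = major version; echoes the flags
  local cflags='-fPIC -g -fcommon'
  if [[ $1 == *gcc* ]] && [[ $2 -ge 5 ]]; then
    cflags+=' -Werror'            # matches the [[ 13 -ge 5 ]] branch in the log
  fi
  if [[ $1 == *gcc* ]] && [[ $2 -ge 10 ]]; then
    cflags+=' -Wno-stringop-overflow'  # matches the [[ 13 -ge 10 ]] branch
  fi
  echo "$cflags"
}

gate_cflags gcc 13   # gcc 13, as logged, gets both extra flags
```

In the log the detected version is 13, so both gates pass and `dpdk_cflags` ends up as `-fPIC -g -fcommon -Werror -Wno-stringop-overflow`, which is exactly the `c_args` value echoed back by Meson later in this run.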
00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:04:05.920 eeb0605f11 version: 23.11.0 00:04:05.920 238778122a doc: update release notes for 23.11 00:04:05.920 46aa6b3cfc doc: fix description of RSS features 00:04:05.920 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:04:05.920 7e421ae345 devtools: support skipping forbid rule check 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:04:05.920 16:45:35 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:04:05.920 patching file config/rte_config.h 00:04:05.920 Hunk #1 succeeded at 60 (offset 1 line). 
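The `cmp_versions` walk just traced (split both versions on `IFS=.-:`, then compare field by field until one side wins) can be sketched as a standalone function. This is a simplified re-implementation for illustration, not the actual `scripts/common.sh` code — it skips the `[[ ... =~ ^[0-9]+$ ]]` validation the real script performs, so non-numeric or zero-padded fields are not handled:

```shell
# Sketch of the field-by-field version comparison seen in cmp_versions.
# ver_lt is a hypothetical name; returns 0 (success) when $1 < $2.
ver_lt() {
  local IFS=.-: a b v x y
  read -ra a <<< "$1"          # "23.11.0" -> (23 11 0)
  read -ra b <<< "$2"
  for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
    x=${a[v]:-0}; y=${b[v]:-0}  # missing fields compare as 0
    if (( x > y )); then return 1; fi
    if (( x < y )); then return 0; fi
  done
  return 1                      # equal versions are not strictly less-than
}

ver_lt 23.11.0 21.11.0 || echo "not less-than"  # mirrors the lt call above
```

That first comparison is why the log shows `return 1` at `@367`: the major fields 23 and 21 already decide the result, so `lt 23.11.0 21.11.0` fails and the rte_config.h patch branch runs.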
00:04:05.920 16:45:35 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:04:05.920 16:45:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:04:05.921 16:45:35 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:04:05.921 patching file lib/pcapng/rte_pcapng.c 00:04:05.921 16:45:35 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:04:05.921 16:45:35 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:04:05.921 16:45:35 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:04:05.921 16:45:35 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:04:05.921 16:45:35 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:04:05.921 16:45:35 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:04:05.921 16:45:35 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:04:05.921 16:45:35 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:04:11.199 The Meson build system 00:04:11.199 Version: 1.5.0 00:04:11.199 Source dir: /home/vagrant/spdk_repo/dpdk 00:04:11.199 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:04:11.199 Build type: native build 00:04:11.199 Program cat found: YES (/usr/bin/cat) 00:04:11.199 Project name: DPDK 00:04:11.199 Project version: 23.11.0 00:04:11.199 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:11.199 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:11.199 Host machine cpu family: x86_64 00:04:11.199 Host machine cpu: x86_64 00:04:11.199 Message: ## Building in Developer Mode ## 00:04:11.199 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:11.199 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:04:11.199 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:04:11.199 Program python3 found: YES (/usr/bin/python3) 00:04:11.199 Program cat found: YES (/usr/bin/cat) 00:04:11.199 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
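The `printf %s,` call near the top of this block is what turns the `DPDK_DRIVERS` array into the comma-joined `-Denable_drivers=` value passed to Meson, trailing comma included. A minimal sketch of just that join:

```shell
# How the logged `printf %s,` joins the driver list for -Denable_drivers.
# The format string repeats once per array element, leaving a trailing comma,
# which matches the value visible in the logged meson command line.
DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
enable_drivers=$(printf %s, "${DPDK_DRIVERS[@]}")
echo "$enable_drivers"
```

Meson accepts the trailing comma in a comma-separated option value, so the script does not bother stripping it.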
00:04:11.199 Compiler for C supports arguments -march=native: YES 00:04:11.199 Checking for size of "void *" : 8 00:04:11.199 Checking for size of "void *" : 8 (cached) 00:04:11.199 Library m found: YES 00:04:11.199 Library numa found: YES 00:04:11.199 Has header "numaif.h" : YES 00:04:11.199 Library fdt found: NO 00:04:11.199 Library execinfo found: NO 00:04:11.199 Has header "execinfo.h" : YES 00:04:11.199 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:11.199 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:11.199 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:11.199 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:11.199 Run-time dependency openssl found: YES 3.1.1 00:04:11.199 Run-time dependency libpcap found: YES 1.10.4 00:04:11.199 Has header "pcap.h" with dependency libpcap: YES 00:04:11.199 Compiler for C supports arguments -Wcast-qual: YES 00:04:11.199 Compiler for C supports arguments -Wdeprecated: YES 00:04:11.199 Compiler for C supports arguments -Wformat: YES 00:04:11.199 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:11.199 Compiler for C supports arguments -Wformat-security: NO 00:04:11.199 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:11.199 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:11.199 Compiler for C supports arguments -Wnested-externs: YES 00:04:11.199 Compiler for C supports arguments -Wold-style-definition: YES 00:04:11.199 Compiler for C supports arguments -Wpointer-arith: YES 00:04:11.199 Compiler for C supports arguments -Wsign-compare: YES 00:04:11.199 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:11.199 Compiler for C supports arguments -Wundef: YES 00:04:11.199 Compiler for C supports arguments -Wwrite-strings: YES 00:04:11.199 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:11.199 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:11.199 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:04:11.199 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:11.199 Program objdump found: YES (/usr/bin/objdump) 00:04:11.199 Compiler for C supports arguments -mavx512f: YES 00:04:11.199 Checking if "AVX512 checking" compiles: YES 00:04:11.199 Fetching value of define "__SSE4_2__" : 1 00:04:11.199 Fetching value of define "__AES__" : 1 00:04:11.199 Fetching value of define "__AVX__" : 1 00:04:11.199 Fetching value of define "__AVX2__" : 1 00:04:11.199 Fetching value of define "__AVX512BW__" : 1 00:04:11.199 Fetching value of define "__AVX512CD__" : 1 00:04:11.199 Fetching value of define "__AVX512DQ__" : 1 00:04:11.199 Fetching value of define "__AVX512F__" : 1 00:04:11.199 Fetching value of define "__AVX512VL__" : 1 00:04:11.199 Fetching value of define "__PCLMUL__" : 1 00:04:11.199 Fetching value of define "__RDRND__" : 1 00:04:11.199 Fetching value of define "__RDSEED__" : 1 00:04:11.199 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:11.199 Fetching value of define "__znver1__" : (undefined) 00:04:11.199 Fetching value of define "__znver2__" : (undefined) 00:04:11.199 Fetching value of define "__znver3__" : (undefined) 00:04:11.199 Fetching value of define "__znver4__" : (undefined) 00:04:11.199 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:11.199 Message: lib/log: Defining dependency "log" 00:04:11.199 Message: lib/kvargs: Defining dependency "kvargs" 00:04:11.199 Message: lib/telemetry: Defining dependency "telemetry" 00:04:11.199 Checking for function "getentropy" : NO 00:04:11.199 Message: lib/eal: Defining dependency "eal" 00:04:11.199 Message: lib/ring: Defining dependency "ring" 00:04:11.199 Message: lib/rcu: Defining dependency "rcu" 00:04:11.199 Message: lib/mempool: Defining dependency "mempool" 00:04:11.199 Message: lib/mbuf: Defining dependency "mbuf" 00:04:11.199 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:11.199 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:04:11.199 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:11.199 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:11.199 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:11.199 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:11.199 Compiler for C supports arguments -mpclmul: YES 00:04:11.199 Compiler for C supports arguments -maes: YES 00:04:11.199 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:11.199 Compiler for C supports arguments -mavx512bw: YES 00:04:11.199 Compiler for C supports arguments -mavx512dq: YES 00:04:11.199 Compiler for C supports arguments -mavx512vl: YES 00:04:11.199 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:11.199 Compiler for C supports arguments -mavx2: YES 00:04:11.199 Compiler for C supports arguments -mavx: YES 00:04:11.199 Message: lib/net: Defining dependency "net" 00:04:11.199 Message: lib/meter: Defining dependency "meter" 00:04:11.199 Message: lib/ethdev: Defining dependency "ethdev" 00:04:11.199 Message: lib/pci: Defining dependency "pci" 00:04:11.199 Message: lib/cmdline: Defining dependency "cmdline" 00:04:11.199 Message: lib/metrics: Defining dependency "metrics" 00:04:11.200 Message: lib/hash: Defining dependency "hash" 00:04:11.200 Message: lib/timer: Defining dependency "timer" 00:04:11.200 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:11.200 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:11.200 Fetching value of define "__AVX512CD__" : 1 (cached) 00:04:11.200 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:11.200 Message: lib/acl: Defining dependency "acl" 00:04:11.200 Message: lib/bbdev: Defining dependency "bbdev" 00:04:11.200 Message: lib/bitratestats: Defining dependency "bitratestats" 00:04:11.200 Run-time dependency libelf found: YES 0.191 00:04:11.200 Message: lib/bpf: Defining dependency "bpf" 00:04:11.200 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:04:11.200 Message: lib/compressdev: Defining dependency "compressdev" 00:04:11.200 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:11.200 Message: lib/distributor: Defining dependency "distributor" 00:04:11.200 Message: lib/dmadev: Defining dependency "dmadev" 00:04:11.200 Message: lib/efd: Defining dependency "efd" 00:04:11.200 Message: lib/eventdev: Defining dependency "eventdev" 00:04:11.200 Message: lib/dispatcher: Defining dependency "dispatcher" 00:04:11.200 Message: lib/gpudev: Defining dependency "gpudev" 00:04:11.200 Message: lib/gro: Defining dependency "gro" 00:04:11.200 Message: lib/gso: Defining dependency "gso" 00:04:11.200 Message: lib/ip_frag: Defining dependency "ip_frag" 00:04:11.200 Message: lib/jobstats: Defining dependency "jobstats" 00:04:11.200 Message: lib/latencystats: Defining dependency "latencystats" 00:04:11.200 Message: lib/lpm: Defining dependency "lpm" 00:04:11.200 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:11.200 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:11.200 Fetching value of define "__AVX512IFMA__" : (undefined) 00:04:11.200 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:04:11.200 Message: lib/member: Defining dependency "member" 00:04:11.200 Message: lib/pcapng: Defining dependency "pcapng" 00:04:11.200 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:11.200 Message: lib/power: Defining dependency "power" 00:04:11.200 Message: lib/rawdev: Defining dependency "rawdev" 00:04:11.200 Message: lib/regexdev: Defining dependency "regexdev" 00:04:11.200 Message: lib/mldev: Defining dependency "mldev" 00:04:11.200 Message: lib/rib: Defining dependency "rib" 00:04:11.200 Message: lib/reorder: Defining dependency "reorder" 00:04:11.200 Message: lib/sched: Defining dependency "sched" 00:04:11.200 Message: lib/security: Defining dependency "security" 00:04:11.200 Message: lib/stack: Defining dependency "stack" 00:04:11.200 Has header 
"linux/userfaultfd.h" : YES 00:04:11.200 Has header "linux/vduse.h" : YES 00:04:11.200 Message: lib/vhost: Defining dependency "vhost" 00:04:11.200 Message: lib/ipsec: Defining dependency "ipsec" 00:04:11.200 Message: lib/pdcp: Defining dependency "pdcp" 00:04:11.200 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:11.200 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:11.200 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:11.200 Message: lib/fib: Defining dependency "fib" 00:04:11.200 Message: lib/port: Defining dependency "port" 00:04:11.200 Message: lib/pdump: Defining dependency "pdump" 00:04:11.200 Message: lib/table: Defining dependency "table" 00:04:11.200 Message: lib/pipeline: Defining dependency "pipeline" 00:04:11.200 Message: lib/graph: Defining dependency "graph" 00:04:11.200 Message: lib/node: Defining dependency "node" 00:04:11.200 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:11.200 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:11.200 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:13.108 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:13.108 Compiler for C supports arguments -Wno-sign-compare: YES 00:04:13.108 Compiler for C supports arguments -Wno-unused-value: YES 00:04:13.108 Compiler for C supports arguments -Wno-format: YES 00:04:13.108 Compiler for C supports arguments -Wno-format-security: YES 00:04:13.108 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:04:13.108 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:13.108 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:04:13.108 Compiler for C supports arguments -Wno-unused-parameter: YES 00:04:13.108 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:13.108 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:13.108 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:13.108 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:04:13.108 Compiler for C supports arguments -march=skylake-avx512: YES 00:04:13.108 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:04:13.108 Has header "sys/epoll.h" : YES 00:04:13.108 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:13.108 Configuring doxy-api-html.conf using configuration 00:04:13.108 Configuring doxy-api-man.conf using configuration 00:04:13.108 Program mandb found: YES (/usr/bin/mandb) 00:04:13.108 Program sphinx-build found: NO 00:04:13.108 Configuring rte_build_config.h using configuration 00:04:13.108 Message: 00:04:13.108 ================= 00:04:13.108 Applications Enabled 00:04:13.108 ================= 00:04:13.108 00:04:13.108 apps: 00:04:13.108 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:04:13.108 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:04:13.108 test-pmd, test-regex, test-sad, test-security-perf, 00:04:13.108 00:04:13.108 Message: 00:04:13.108 ================= 00:04:13.108 Libraries Enabled 00:04:13.108 ================= 00:04:13.108 00:04:13.108 libs: 00:04:13.108 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:13.108 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:04:13.108 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:04:13.108 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:04:13.108 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:04:13.108 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:04:13.108 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:04:13.108 00:04:13.108 00:04:13.108 Message: 00:04:13.108 =============== 00:04:13.108 Drivers Enabled 00:04:13.108 =============== 00:04:13.108 00:04:13.108 common: 00:04:13.108 00:04:13.108 bus: 00:04:13.108 pci, vdev, 00:04:13.108 mempool: 00:04:13.108 ring, 00:04:13.108 dma: 
00:04:13.108 00:04:13.108 net: 00:04:13.108 i40e, 00:04:13.108 raw: 00:04:13.108 00:04:13.108 crypto: 00:04:13.108 00:04:13.108 compress: 00:04:13.108 00:04:13.108 regex: 00:04:13.108 00:04:13.108 ml: 00:04:13.108 00:04:13.108 vdpa: 00:04:13.108 00:04:13.108 event: 00:04:13.108 00:04:13.108 baseband: 00:04:13.108 00:04:13.108 gpu: 00:04:13.108 00:04:13.108 00:04:13.108 Message: 00:04:13.108 ================= 00:04:13.108 Content Skipped 00:04:13.108 ================= 00:04:13.108 00:04:13.108 apps: 00:04:13.108 00:04:13.108 libs: 00:04:13.108 00:04:13.108 drivers: 00:04:13.108 common/cpt: not in enabled drivers build config 00:04:13.108 common/dpaax: not in enabled drivers build config 00:04:13.108 common/iavf: not in enabled drivers build config 00:04:13.108 common/idpf: not in enabled drivers build config 00:04:13.108 common/mvep: not in enabled drivers build config 00:04:13.108 common/octeontx: not in enabled drivers build config 00:04:13.108 bus/auxiliary: not in enabled drivers build config 00:04:13.108 bus/cdx: not in enabled drivers build config 00:04:13.108 bus/dpaa: not in enabled drivers build config 00:04:13.108 bus/fslmc: not in enabled drivers build config 00:04:13.108 bus/ifpga: not in enabled drivers build config 00:04:13.108 bus/platform: not in enabled drivers build config 00:04:13.108 bus/vmbus: not in enabled drivers build config 00:04:13.108 common/cnxk: not in enabled drivers build config 00:04:13.108 common/mlx5: not in enabled drivers build config 00:04:13.108 common/nfp: not in enabled drivers build config 00:04:13.108 common/qat: not in enabled drivers build config 00:04:13.108 common/sfc_efx: not in enabled drivers build config 00:04:13.108 mempool/bucket: not in enabled drivers build config 00:04:13.108 mempool/cnxk: not in enabled drivers build config 00:04:13.108 mempool/dpaa: not in enabled drivers build config 00:04:13.108 mempool/dpaa2: not in enabled drivers build config 00:04:13.108 mempool/octeontx: not in enabled drivers build 
config 00:04:13.108 mempool/stack: not in enabled drivers build config 00:04:13.108 dma/cnxk: not in enabled drivers build config 00:04:13.108 dma/dpaa: not in enabled drivers build config 00:04:13.108 dma/dpaa2: not in enabled drivers build config 00:04:13.108 dma/hisilicon: not in enabled drivers build config 00:04:13.108 dma/idxd: not in enabled drivers build config 00:04:13.108 dma/ioat: not in enabled drivers build config 00:04:13.108 dma/skeleton: not in enabled drivers build config 00:04:13.108 net/af_packet: not in enabled drivers build config 00:04:13.108 net/af_xdp: not in enabled drivers build config 00:04:13.108 net/ark: not in enabled drivers build config 00:04:13.108 net/atlantic: not in enabled drivers build config 00:04:13.108 net/avp: not in enabled drivers build config 00:04:13.108 net/axgbe: not in enabled drivers build config 00:04:13.108 net/bnx2x: not in enabled drivers build config 00:04:13.108 net/bnxt: not in enabled drivers build config 00:04:13.108 net/bonding: not in enabled drivers build config 00:04:13.108 net/cnxk: not in enabled drivers build config 00:04:13.109 net/cpfl: not in enabled drivers build config 00:04:13.109 net/cxgbe: not in enabled drivers build config 00:04:13.109 net/dpaa: not in enabled drivers build config 00:04:13.109 net/dpaa2: not in enabled drivers build config 00:04:13.109 net/e1000: not in enabled drivers build config 00:04:13.109 net/ena: not in enabled drivers build config 00:04:13.109 net/enetc: not in enabled drivers build config 00:04:13.109 net/enetfec: not in enabled drivers build config 00:04:13.109 net/enic: not in enabled drivers build config 00:04:13.109 net/failsafe: not in enabled drivers build config 00:04:13.109 net/fm10k: not in enabled drivers build config 00:04:13.109 net/gve: not in enabled drivers build config 00:04:13.109 net/hinic: not in enabled drivers build config 00:04:13.109 net/hns3: not in enabled drivers build config 00:04:13.109 net/iavf: not in enabled drivers build config 
00:04:13.109 net/ice: not in enabled drivers build config 00:04:13.109 net/idpf: not in enabled drivers build config 00:04:13.109 net/igc: not in enabled drivers build config 00:04:13.109 net/ionic: not in enabled drivers build config 00:04:13.109 net/ipn3ke: not in enabled drivers build config 00:04:13.109 net/ixgbe: not in enabled drivers build config 00:04:13.109 net/mana: not in enabled drivers build config 00:04:13.109 net/memif: not in enabled drivers build config 00:04:13.109 net/mlx4: not in enabled drivers build config 00:04:13.109 net/mlx5: not in enabled drivers build config 00:04:13.109 net/mvneta: not in enabled drivers build config 00:04:13.109 net/mvpp2: not in enabled drivers build config 00:04:13.109 net/netvsc: not in enabled drivers build config 00:04:13.109 net/nfb: not in enabled drivers build config 00:04:13.109 net/nfp: not in enabled drivers build config 00:04:13.109 net/ngbe: not in enabled drivers build config 00:04:13.109 net/null: not in enabled drivers build config 00:04:13.109 net/octeontx: not in enabled drivers build config 00:04:13.109 net/octeon_ep: not in enabled drivers build config 00:04:13.109 net/pcap: not in enabled drivers build config 00:04:13.109 net/pfe: not in enabled drivers build config 00:04:13.109 net/qede: not in enabled drivers build config 00:04:13.109 net/ring: not in enabled drivers build config 00:04:13.109 net/sfc: not in enabled drivers build config 00:04:13.109 net/softnic: not in enabled drivers build config 00:04:13.109 net/tap: not in enabled drivers build config 00:04:13.109 net/thunderx: not in enabled drivers build config 00:04:13.109 net/txgbe: not in enabled drivers build config 00:04:13.109 net/vdev_netvsc: not in enabled drivers build config 00:04:13.109 net/vhost: not in enabled drivers build config 00:04:13.109 net/virtio: not in enabled drivers build config 00:04:13.109 net/vmxnet3: not in enabled drivers build config 00:04:13.109 raw/cnxk_bphy: not in enabled drivers build config 00:04:13.109 
raw/cnxk_gpio: not in enabled drivers build config 00:04:13.109 raw/dpaa2_cmdif: not in enabled drivers build config 00:04:13.109 raw/ifpga: not in enabled drivers build config 00:04:13.109 raw/ntb: not in enabled drivers build config 00:04:13.109 raw/skeleton: not in enabled drivers build config 00:04:13.109 crypto/armv8: not in enabled drivers build config 00:04:13.109 crypto/bcmfs: not in enabled drivers build config 00:04:13.109 crypto/caam_jr: not in enabled drivers build config 00:04:13.109 crypto/ccp: not in enabled drivers build config 00:04:13.109 crypto/cnxk: not in enabled drivers build config 00:04:13.109 crypto/dpaa_sec: not in enabled drivers build config 00:04:13.109 crypto/dpaa2_sec: not in enabled drivers build config 00:04:13.109 crypto/ipsec_mb: not in enabled drivers build config 00:04:13.109 crypto/mlx5: not in enabled drivers build config 00:04:13.109 crypto/mvsam: not in enabled drivers build config 00:04:13.109 crypto/nitrox: not in enabled drivers build config 00:04:13.109 crypto/null: not in enabled drivers build config 00:04:13.109 crypto/octeontx: not in enabled drivers build config 00:04:13.109 crypto/openssl: not in enabled drivers build config 00:04:13.109 crypto/scheduler: not in enabled drivers build config 00:04:13.109 crypto/uadk: not in enabled drivers build config 00:04:13.109 crypto/virtio: not in enabled drivers build config 00:04:13.109 compress/isal: not in enabled drivers build config 00:04:13.109 compress/mlx5: not in enabled drivers build config 00:04:13.109 compress/octeontx: not in enabled drivers build config 00:04:13.109 compress/zlib: not in enabled drivers build config 00:04:13.109 regex/mlx5: not in enabled drivers build config 00:04:13.109 regex/cn9k: not in enabled drivers build config 00:04:13.109 ml/cnxk: not in enabled drivers build config 00:04:13.109 vdpa/ifc: not in enabled drivers build config 00:04:13.109 vdpa/mlx5: not in enabled drivers build config 00:04:13.109 vdpa/nfp: not in enabled drivers build 
config 00:04:13.109 vdpa/sfc: not in enabled drivers build config 00:04:13.109 event/cnxk: not in enabled drivers build config 00:04:13.109 event/dlb2: not in enabled drivers build config 00:04:13.109 event/dpaa: not in enabled drivers build config 00:04:13.109 event/dpaa2: not in enabled drivers build config 00:04:13.109 event/dsw: not in enabled drivers build config 00:04:13.109 event/opdl: not in enabled drivers build config 00:04:13.109 event/skeleton: not in enabled drivers build config 00:04:13.109 event/sw: not in enabled drivers build config 00:04:13.109 event/octeontx: not in enabled drivers build config 00:04:13.109 baseband/acc: not in enabled drivers build config 00:04:13.109 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:04:13.109 baseband/fpga_lte_fec: not in enabled drivers build config 00:04:13.109 baseband/la12xx: not in enabled drivers build config 00:04:13.109 baseband/null: not in enabled drivers build config 00:04:13.109 baseband/turbo_sw: not in enabled drivers build config 00:04:13.109 gpu/cuda: not in enabled drivers build config 00:04:13.109 00:04:13.109 00:04:13.109 Build targets in project: 217 00:04:13.109 00:04:13.109 DPDK 23.11.0 00:04:13.109 00:04:13.109 User defined options 00:04:13.109 libdir : lib 00:04:13.109 prefix : /home/vagrant/spdk_repo/dpdk/build 00:04:13.109 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:04:13.109 c_link_args : 00:04:13.109 enable_docs : false 00:04:13.109 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:04:13.109 enable_kmods : false 00:04:13.109 machine : native 00:04:13.109 tests : false 00:04:13.109 00:04:13.109 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:13.109 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:04:13.109 16:45:42 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:04:13.109 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:04:13.369 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:13.369 [2/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:13.369 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:13.369 [4/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:13.369 [5/707] Linking static target lib/librte_kvargs.a 00:04:13.369 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:13.369 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:13.369 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:13.369 [9/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:13.369 [10/707] Linking static target lib/librte_log.a 00:04:13.369 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.630 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:13.630 [13/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:13.630 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:13.630 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:13.630 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:13.889 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.889 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:13.889 [19/707] Linking target lib/librte_log.so.24.0 00:04:13.889 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:13.889 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:13.889 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:13.889 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:14.148 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:14.148 [25/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:14.148 [26/707] Linking static target lib/librte_telemetry.a 00:04:14.148 [27/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:14.148 [28/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:04:14.148 [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:14.148 [30/707] Linking target lib/librte_kvargs.so.24.0 00:04:14.148 [31/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:14.148 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:14.408 [33/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:04:14.408 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:14.408 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:14.408 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:14.408 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:14.408 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:14.408 [39/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.408 [40/707] Linking target lib/librte_telemetry.so.24.0 00:04:14.408 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:14.408 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:14.408 [43/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:14.667 [44/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:04:14.667 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:14.667 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:14.927 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:14.927 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:14.927 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:14.927 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:14.927 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:14.927 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:14.927 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:14.927 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:15.186 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:15.186 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:15.186 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:15.186 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:15.186 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:15.186 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:15.186 [61/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:15.186 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:15.186 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:15.186 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:15.186 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:15.186 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:15.445 [67/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:15.445 [68/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:15.445 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:15.445 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:15.445 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:15.445 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:15.705 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:15.705 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:15.705 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:15.705 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:15.705 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:15.705 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:15.964 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:15.964 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:15.964 [81/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:15.964 [82/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:15.964 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:15.964 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:15.964 [85/707] Linking static target lib/librte_ring.a 00:04:15.964 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:16.223 [87/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.223 [88/707] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:16.223 [89/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:16.223 [90/707] Linking static target lib/librte_eal.a 00:04:16.223 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:16.223 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:16.223 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:16.223 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:16.223 [95/707] Linking static target lib/librte_mempool.a 00:04:16.482 [96/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:16.482 [97/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:16.482 [98/707] Linking static target lib/librte_rcu.a 00:04:16.742 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:16.742 [100/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:16.742 [101/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:16.742 [102/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:16.742 [103/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:16.742 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:16.742 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.742 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.742 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:17.001 [108/707] Linking static target lib/librte_net.a 00:04:17.001 [109/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:17.001 [110/707] Linking static target lib/librte_mbuf.a 00:04:17.001 [111/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:17.001 [112/707] Linking static target lib/librte_meter.a 
00:04:17.001 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:17.001 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.260 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:17.260 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:17.260 [117/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.260 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:17.520 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.520 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:17.520 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:17.779 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:17.779 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:17.779 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:18.038 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:18.038 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:18.038 [127/707] Linking static target lib/librte_pci.a 00:04:18.038 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:18.038 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:18.038 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:18.038 [131/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:18.038 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:18.038 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.297 [134/707] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:18.297 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:18.297 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:18.297 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:18.297 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:18.297 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:18.297 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:18.297 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:18.297 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:18.297 [143/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:18.297 [144/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:18.297 [145/707] Linking static target lib/librte_cmdline.a 00:04:18.557 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:04:18.557 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:04:18.557 [148/707] Linking static target lib/librte_metrics.a 00:04:18.557 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:18.816 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:18.816 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.075 [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:19.075 [153/707] Linking static target lib/librte_timer.a 00:04:19.075 [154/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.075 [155/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:19.334 [156/707] Generating lib/timer.sym_chk with a 
custom command (wrapped by meson to capture output) 00:04:19.334 [157/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:04:19.334 [158/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:04:19.334 [159/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:04:19.593 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:04:19.852 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:04:19.852 [162/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:04:19.852 [163/707] Linking static target lib/librte_bitratestats.a 00:04:20.112 [164/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:04:20.112 [165/707] Linking static target lib/librte_bbdev.a 00:04:20.112 [166/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:04:20.112 [167/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.112 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:04:20.370 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:04:20.628 [170/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.628 [171/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:20.628 [172/707] Linking static target lib/librte_hash.a 00:04:20.628 [173/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:04:20.628 [174/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:20.628 [175/707] Linking static target lib/librte_ethdev.a 00:04:20.887 [176/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:04:20.887 [177/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:04:20.887 [178/707] Linking static target lib/acl/libavx2_tmp.a 00:04:20.887 [179/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:04:20.887 [180/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to 
capture output) 00:04:20.887 [181/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:04:20.887 [182/707] Linking target lib/librte_eal.so.24.0 00:04:20.887 [183/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.887 [184/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:04:21.147 [185/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:04:21.147 [186/707] Linking target lib/librte_ring.so.24.0 00:04:21.147 [187/707] Linking target lib/librte_meter.so.24.0 00:04:21.147 [188/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:04:21.147 [189/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:04:21.147 [190/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:04:21.147 [191/707] Linking target lib/librte_rcu.so.24.0 00:04:21.147 [192/707] Linking target lib/librte_mempool.so.24.0 00:04:21.147 [193/707] Linking target lib/librte_pci.so.24.0 00:04:21.406 [194/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:04:21.406 [195/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:04:21.406 [196/707] Linking static target lib/librte_cfgfile.a 00:04:21.406 [197/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:04:21.406 [198/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:04:21.406 [199/707] Linking target lib/librte_timer.so.24.0 00:04:21.406 [200/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:04:21.406 [201/707] Linking target lib/librte_mbuf.so.24.0 00:04:21.406 [202/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:21.406 [203/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:21.406 [204/707] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:04:21.406 [205/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:04:21.406 [206/707] Linking target lib/librte_net.so.24.0 00:04:21.666 [207/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:04:21.666 [208/707] Linking target lib/librte_bbdev.so.24.0 00:04:21.666 [209/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:04:21.666 [210/707] Linking static target lib/librte_bpf.a 00:04:21.666 [211/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.666 [212/707] Linking target lib/librte_cmdline.so.24.0 00:04:21.666 [213/707] Linking target lib/librte_hash.so.24.0 00:04:21.666 [214/707] Linking target lib/librte_cfgfile.so.24.0 00:04:21.666 [215/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:21.666 [216/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:04:21.666 [217/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:21.666 [218/707] Linking static target lib/librte_compressdev.a 00:04:21.925 [219/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:04:21.925 [220/707] Linking static target lib/librte_acl.a 00:04:21.925 [221/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.925 [222/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:21.925 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:04:21.925 [224/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.184 [225/707] Linking target lib/librte_acl.so.24.0 00:04:22.184 [226/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:04:22.184 [227/707] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:04:22.184 [228/707] Linking static target lib/librte_distributor.a 00:04:22.185 [229/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:22.185 [230/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.185 [231/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:04:22.185 [232/707] Linking target lib/librte_compressdev.so.24.0 00:04:22.444 [233/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.444 [234/707] Linking target lib/librte_distributor.so.24.0 00:04:22.444 [235/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:04:22.444 [236/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:22.444 [237/707] Linking static target lib/librte_dmadev.a 00:04:22.703 [238/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.703 [239/707] Linking target lib/librte_dmadev.so.24.0 00:04:22.703 [240/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:04:22.703 [241/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:04:22.703 [242/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:04:22.962 [243/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:04:22.962 [244/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:04:22.962 [245/707] Linking static target lib/librte_efd.a 00:04:23.222 [246/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.222 [247/707] Linking target lib/librte_efd.so.24.0 00:04:23.222 [248/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:23.222 [249/707] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:04:23.222 [250/707] Linking static target lib/librte_cryptodev.a 00:04:23.222 [251/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:04:23.222 [252/707] Linking static target lib/librte_dispatcher.a 00:04:23.481 [253/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:04:23.481 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:04:23.481 [255/707] Linking static target lib/librte_gpudev.a 00:04:23.740 [256/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.740 [257/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:04:23.740 [258/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:04:23.740 [259/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:04:24.000 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:04:24.000 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:04:24.259 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:04:24.259 [263/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.259 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:04:24.260 [265/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:04:24.260 [266/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.260 [267/707] Linking static target lib/librte_gro.a 00:04:24.260 [268/707] Linking target lib/librte_gpudev.so.24.0 00:04:24.260 [269/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:04:24.260 [270/707] Linking target lib/librte_cryptodev.so.24.0 00:04:24.260 [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:04:24.519 [272/707] Generating symbol file 
lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:04:24.519 [273/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:04:24.519 [274/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.519 [275/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:04:24.519 [276/707] Linking static target lib/librte_eventdev.a 00:04:24.519 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:04:24.519 [278/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.519 [279/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:04:24.778 [280/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:04:24.779 [281/707] Linking target lib/librte_ethdev.so.24.0 00:04:24.779 [282/707] Linking static target lib/librte_gso.a 00:04:24.779 [283/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:04:24.779 [284/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:04:24.779 [285/707] Linking target lib/librte_metrics.so.24.0 00:04:24.779 [286/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.779 [287/707] Linking target lib/librte_bpf.so.24.0 00:04:24.779 [288/707] Linking target lib/librte_gro.so.24.0 00:04:24.779 [289/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:04:24.779 [290/707] Linking target lib/librte_gso.so.24.0 00:04:24.779 [291/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:04:24.779 [292/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:04:25.038 [293/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:04:25.038 [294/707] Linking target lib/librte_bitratestats.so.24.0 00:04:25.038 [295/707] Generating symbol file 
lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:04:25.038 [296/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:04:25.038 [297/707] Linking static target lib/librte_jobstats.a
00:04:25.038 [298/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:04:25.038 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:04:25.038 [300/707] Linking static target lib/librte_ip_frag.a
00:04:25.297 [301/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:04:25.297 [302/707] Linking target lib/librte_jobstats.so.24.0
00:04:25.297 [303/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:04:25.297 [304/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:04:25.297 [305/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:04:25.297 [306/707] Linking static target lib/librte_latencystats.a
00:04:25.297 [307/707] Linking target lib/librte_ip_frag.so.24.0
00:04:25.297 [308/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:04:25.297 [309/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:04:25.297 [310/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:04:25.557 [311/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:04:25.557 [312/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:04:25.557 [313/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:04:25.557 [314/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:04:25.557 [315/707] Linking target lib/librte_latencystats.so.24.0
00:04:25.557 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:04:25.815 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:04:25.815 [318/707] Linking static target lib/librte_lpm.a
00:04:25.815 [319/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:04:25.815 [320/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:04:25.815 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:04:26.074 [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:04:26.074 [323/707] Linking static target lib/librte_pcapng.a
00:04:26.074 [324/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:04:26.074 [325/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:04:26.074 [326/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:04:26.074 [327/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:04:26.074 [328/707] Linking target lib/librte_lpm.so.24.0
00:04:26.074 [329/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:26.074 [330/707] Linking target lib/librte_eventdev.so.24.0
00:04:26.074 [331/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:04:26.074 [332/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:04:26.074 [333/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:04:26.074 [334/707] Linking target lib/librte_pcapng.so.24.0
00:04:26.333 [335/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:04:26.333 [336/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:04:26.333 [337/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:04:26.333 [338/707] Linking target lib/librte_dispatcher.so.24.0
00:04:26.333 [339/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:04:26.592 [340/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:04:26.592 [341/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:04:26.592 [342/707] Linking static target lib/librte_power.a
00:04:26.592 [343/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:04:26.592 [344/707] Linking static target lib/librte_regexdev.a
00:04:26.592 [345/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:04:26.592 [346/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:04:26.592 [347/707] Linking static target lib/librte_rawdev.a
00:04:26.592 [348/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:04:26.592 [349/707] Linking static target lib/librte_member.a
00:04:26.592 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:04:26.592 [351/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:04:26.854 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:04:26.854 [353/707] Linking static target lib/librte_mldev.a
00:04:26.854 [354/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:04:26.854 [355/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:04:26.854 [356/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:26.854 [357/707] Linking target lib/librte_member.so.24.0
00:04:26.854 [358/707] Linking target lib/librte_rawdev.so.24.0
00:04:27.121 [359/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:04:27.121 [360/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:04:27.121 [361/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:04:27.121 [362/707] Linking target lib/librte_power.so.24.0
00:04:27.121 [363/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:04:27.121 [364/707] Linking static target lib/librte_reorder.a
00:04:27.121 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:27.121 [366/707] Linking target lib/librte_regexdev.so.24.0
00:04:27.121 [367/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:04:27.121 [368/707] Linking static target lib/librte_rib.a
00:04:27.406 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:04:27.406 [370/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:04:27.406 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:04:27.406 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:04:27.406 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:04:27.406 [374/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:04:27.406 [375/707] Linking static target lib/librte_stack.a
00:04:27.406 [376/707] Linking target lib/librte_reorder.so.24.0
00:04:27.406 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:04:27.406 [378/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:04:27.685 [379/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:04:27.685 [380/707] Linking static target lib/librte_security.a
00:04:27.685 [381/707] Linking target lib/librte_stack.so.24.0
00:04:27.685 [382/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:04:27.685 [383/707] Linking target lib/librte_rib.so.24.0
00:04:27.685 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:04:27.685 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:04:27.685 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:04:27.685 [387/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:27.944 [388/707] Linking target lib/librte_mldev.so.24.0
00:04:27.944 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:04:27.944 [390/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:04:27.944 [391/707] Linking target lib/librte_security.so.24.0
00:04:27.944 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:04:28.203 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:04:28.203 [394/707] Linking static target lib/librte_sched.a
00:04:28.203 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:04:28.203 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:04:28.462 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:04:28.462 [398/707] Linking target lib/librte_sched.so.24.0
00:04:28.462 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:04:28.462 [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:04:28.462 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:04:28.462 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:04:28.721 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:04:28.721 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:04:28.980 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:04:28.980 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:04:28.980 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:04:29.240 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:04:29.240 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:04:29.240 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:04:29.240 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:04:29.240 [412/707] Linking static target lib/librte_ipsec.a
00:04:29.240 [413/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:04:29.240 [414/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:04:29.499 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:04:29.499 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:04:29.499 [417/707] Linking target lib/librte_ipsec.so.24.0
00:04:29.758 [418/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:04:29.758 [419/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:04:29.758 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:04:29.758 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:04:30.017 [422/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:04:30.017 [423/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:04:30.017 [424/707] Linking static target lib/librte_fib.a
00:04:30.275 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:04:30.275 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:04:30.275 [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:04:30.275 [428/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:04:30.275 [429/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:04:30.275 [430/707] Linking static target lib/librte_pdcp.a
00:04:30.275 [431/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:04:30.275 [432/707] Linking target lib/librte_fib.so.24.0
00:04:30.534 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:04:30.534 [434/707] Linking target lib/librte_pdcp.so.24.0
00:04:30.793 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:04:30.793 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:04:30.793 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:04:30.793 [438/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:04:30.793 [439/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:04:31.052 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:04:31.052 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:04:31.052 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:04:31.311 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:04:31.311 [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:04:31.311 [445/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:04:31.311 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:04:31.311 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:04:31.311 [448/707] Linking static target lib/librte_port.a
00:04:31.570 [449/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:04:31.570 [450/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:04:31.570 [451/707] Linking static target lib/librte_pdump.a
00:04:31.570 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:04:31.570 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:04:31.829 [454/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:04:31.829 [455/707] Linking target lib/librte_port.so.24.0
00:04:31.829 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:04:31.829 [457/707] Linking target lib/librte_pdump.so.24.0
00:04:31.830 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:04:32.089 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:04:32.089 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:04:32.089 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:04:32.089 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:04:32.089 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:04:32.347 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:04:32.347 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:04:32.347 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:04:32.606 [467/707] Linking static target lib/librte_table.a
00:04:32.606 [468/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:04:32.866 [469/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:04:32.866 [470/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:04:32.866 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:04:32.866 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:04:33.125 [473/707] Linking target lib/librte_table.so.24.0
00:04:33.125 [474/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:04:33.125 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:04:33.125 [476/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:04:33.384 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:04:33.384 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:04:33.384 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:04:33.644 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:04:33.644 [481/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:04:33.644 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:04:33.644 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:04:33.903 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:04:33.903 [485/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:04:33.903 [486/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:04:33.903 [487/707] Linking static target lib/librte_graph.a
00:04:33.903 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:04:34.163 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:04:34.163 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:04:34.422 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:04:34.422 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:04:34.422 [493/707] Linking target lib/librte_graph.so.24.0
00:04:34.682 [494/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:04:34.682 [495/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:04:34.682 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o
00:04:34.682 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:04:34.682 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:04:34.941 [499/707] Compiling C object lib/librte_node.a.p/node_log.c.o
00:04:34.941 [500/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:04:34.941 [501/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:04:34.941 [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:04:34.941 [503/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:04:34.941 [504/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:04:35.200 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:04:35.200 [506/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:04:35.200 [507/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:04:35.464 [508/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:04:35.464 [509/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:04:35.464 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:04:35.464 [511/707] Linking static target lib/librte_node.a
00:04:35.464 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:04:35.724 [513/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:04:35.724 [514/707] Linking static target drivers/libtmp_rte_bus_pci.a
00:04:35.724 [515/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:04:35.724 [516/707] Linking static target drivers/libtmp_rte_bus_vdev.a
00:04:35.724 [517/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:04:35.724 [518/707] Linking target lib/librte_node.so.24.0
00:04:35.724 [519/707] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:04:35.724 [520/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:35.724 [521/707] Linking static target drivers/librte_bus_pci.a
00:04:35.984 [522/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:04:35.984 [523/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:35.984 [524/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:35.984 [525/707] Linking static target drivers/librte_bus_vdev.a
00:04:35.984 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:04:35.984 [527/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:35.984 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:04:36.243 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:04:36.243 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:36.243 [531/707] Linking target drivers/librte_bus_vdev.so.24.0
00:04:36.243 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:04:36.243 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a
00:04:36.243 [534/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:04:36.243 [535/707] Linking target drivers/librte_bus_pci.so.24.0
00:04:36.243 [536/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:04:36.243 [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:04:36.243 [538/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:04:36.243 [539/707] Linking static target drivers/librte_mempool_ring.a
00:04:36.243 [540/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:04:36.243 [541/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:04:36.503 [542/707] Linking target drivers/librte_mempool_ring.so.24.0
00:04:36.503 [543/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:04:36.762 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:04:36.762 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:04:37.022 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:04:37.022 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a
00:04:37.590 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:04:37.590 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:04:37.849 [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:04:37.849 [551/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:04:37.849 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:04:37.849 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:04:37.849 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:04:38.109 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:04:38.109 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:04:38.368 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:04:38.368 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:04:38.368 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:04:38.628 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:04:38.628 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:04:38.628 [562/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:04:38.887 [563/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:04:39.147 [564/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:04:39.147 [565/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:04:39.147 [566/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:04:39.147 [567/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:04:39.147 [568/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:04:39.147 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:04:39.406 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:04:39.406 [571/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:04:39.406 [572/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:04:39.406 [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:04:39.665 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:04:39.665 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:04:39.925 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:04:39.925 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:04:39.925 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:04:39.925 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:04:39.925 [580/707] Linking static target drivers/libtmp_rte_net_i40e.a
00:04:39.925 [581/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:04:40.184 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:04:40.184 [583/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:04:40.184 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:04:40.184 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:04:40.184 [586/707] Linking static target drivers/librte_net_i40e.a
00:04:40.443 [587/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:04:40.443 [588/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:04:40.443 [589/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:04:40.443 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:04:40.703 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:04:40.703 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:04:40.703 [593/707] Linking target drivers/librte_net_i40e.so.24.0
00:04:40.703 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:04:40.962 [595/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:04:40.962 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:04:40.962 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:04:40.962 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:04:41.221 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:04:41.480 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:04:41.480 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:04:41.480 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:04:41.480 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:04:41.480 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:04:41.740 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:04:41.740 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:04:41.740 [607/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:04:41.740 [608/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:04:41.999 [609/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:04:41.999 [610/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:04:41.999 [611/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:04:41.999 [612/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:04:42.258 [613/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:04:42.258 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:04:42.258 [615/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:04:42.517 [616/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:04:42.517 [617/707] Linking static target lib/librte_vhost.a
00:04:42.517 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:04:43.085 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:04:43.085 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:04:43.085 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:04:43.085 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:04:43.345 [623/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:04:43.345 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:04:43.345 [625/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:04:43.345 [626/707] Linking target lib/librte_vhost.so.24.0
00:04:43.345 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:04:43.604 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:04:43.604 [629/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:04:43.604 [630/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:04:43.604 [631/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:04:43.604 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:04:43.604 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:04:43.864 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:04:43.864 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:04:43.864 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:04:44.124 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:04:44.124 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:04:44.124 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:04:44.124 [640/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:04:44.124 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:04:44.383 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:04:44.383 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:04:44.383 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:04:44.643 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:04:44.643 [646/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:04:44.643 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:04:44.643 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:04:44.643 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:04:44.643 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:04:44.903 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:04:44.903 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:04:45.162 [653/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:04:45.162 [654/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:04:45.162 [655/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:04:45.162 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:04:45.421 [657/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:04:45.421 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:04:45.421 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:04:45.681 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:04:45.681 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:04:45.681 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:04:45.940 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:04:45.941 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:04:45.941 [665/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:04:45.941 [666/707] Linking static target lib/librte_pipeline.a
00:04:45.941 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:04:46.200 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:04:46.200 [669/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:04:46.200 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:04:46.460 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:04:46.460 [672/707] Linking target app/dpdk-dumpcap
00:04:46.460 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:04:46.719 [674/707] Linking target app/dpdk-graph
00:04:46.719 [675/707] Linking target app/dpdk-proc-info
00:04:46.719 [676/707] Linking target app/dpdk-pdump
00:04:46.719 [677/707] Linking target app/dpdk-test-bbdev
00:04:46.719 [678/707] Linking target app/dpdk-test-acl
00:04:46.978 [679/707] Linking target app/dpdk-test-cmdline
00:04:46.978 [680/707] Linking target app/dpdk-test-crypto-perf
00:04:46.978 [681/707] Linking target app/dpdk-test-compress-perf
00:04:46.978 [682/707] Linking target app/dpdk-test-dma-perf
00:04:47.237 [683/707] Linking target app/dpdk-test-eventdev
00:04:47.237 [684/707] Linking target app/dpdk-test-fib
00:04:47.237 [685/707] Linking target app/dpdk-test-flow-perf
00:04:47.496 [686/707] Linking target app/dpdk-test-gpudev
00:04:47.496 [687/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:04:47.496 [688/707] Linking target app/dpdk-test-mldev
00:04:47.497 [689/707] Linking target app/dpdk-test-pipeline
00:04:47.497 [690/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:04:47.497 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:04:47.763 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:04:47.763 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:04:48.038 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:04:48.038 [695/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:04:48.038 [696/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:04:48.038 [697/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:04:48.323 [698/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:04:48.323 [699/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:04:48.323 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:04:48.323 [701/707] Linking target app/dpdk-test-sad
00:04:48.323 [702/707] Linking target lib/librte_pipeline.so.24.0
00:04:48.596 [703/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:04:48.596 [704/707] Linking target app/dpdk-test-regex
00:04:48.856 [705/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:04:48.856 [706/707] Linking target app/dpdk-testpmd
00:04:48.856 [707/707] Linking target app/dpdk-test-security-perf
00:04:48.856 16:46:18 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:04:48.856 16:46:18 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:04:48.856 16:46:18 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:04:49.115 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:04:49.115 [0/1] Installing files.
00:04:49.377 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:49.377 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:49.378 Installing
/home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.378 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 
00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:49.379 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:49.379 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:49.380 Installing 
/home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:49.380 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:49.380 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.380 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 
00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:49.381 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:49.382 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:49.382 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:49.382 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_rcu.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:49.382 Installing 
lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.382 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.383 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.646 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.646 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.646 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.646 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:49.646 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.646 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:49.646 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.646 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:49.646 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:49.646 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:49.646 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.646 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.647 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:49.648
Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.648 
Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing 
/home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:49.649 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:49.649 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:04:49.649 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:04:49.649 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:04:49.649 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:04:49.649 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:04:49.649 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:04:49.649 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:04:49.649 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:04:49.649 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:04:49.649 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:04:49.649 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:04:49.649 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:04:49.649 Installing symlink pointing to librte_mempool.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:04:49.649 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:04:49.649 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:04:49.649 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:04:49.649 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:04:49.649 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:04:49.649 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:04:49.649 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:04:49.649 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:04:49.649 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:04:49.649 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:04:49.649 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:04:49.649 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:04:49.649 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:04:49.649 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:04:49.649 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:04:49.649 Installing symlink pointing to librte_hash.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:04:49.649 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:04:49.649 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:04:49.649 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:04:49.649 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:04:49.649 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:04:49.649 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:04:49.649 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:04:49.649 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:04:49.649 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:04:49.649 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:04:49.649 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:04:49.649 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:04:49.649 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:04:49.649 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:04:49.649 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:04:49.649 Installing symlink pointing to 
librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:04:49.649 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:04:49.649 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:04:49.649 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:04:49.649 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:04:49.649 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:04:49.649 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:04:49.649 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:04:49.649 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:04:49.649 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:04:49.649 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:04:49.649 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:04:49.650 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:04:49.650 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:04:49.650 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:04:49.650 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 
00:04:49.650 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:04:49.650 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:04:49.650 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:04:49.650 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:04:49.650 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:04:49.650 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:04:49.650 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:04:49.650 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:04:49.650 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:04:49.650 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:04:49.650 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:04:49.650 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:04:49.650 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:04:49.650 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:04:49.650 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:04:49.650 Installing symlink pointing to librte_power.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:04:49.650 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:04:49.650 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:04:49.650 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:04:49.650 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:04:49.650 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:04:49.650 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:04:49.650 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:04:49.650 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:04:49.650 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:04:49.650 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:04:49.650 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:04:49.650 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:04:49.650 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:04:49.650 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:04:49.650 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:04:49.650 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:04:49.650 
'./librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:04:49.650 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:04:49.650 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:04:49.650 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:04:49.650 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:04:49.650 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:04:49.650 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:04:49.650 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:04:49.650 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:04:49.650 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:04:49.650 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:04:49.650 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:04:49.650 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:04:49.650 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:04:49.650 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:04:49.650 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:04:49.650 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:04:49.650 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:04:49.650 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:04:49.650 Installing symlink pointing to librte_fib.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:04:49.650 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:04:49.650 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:04:49.650 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:04:49.650 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:04:49.650 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:04:49.650 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:04:49.650 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:04:49.650 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:04:49.650 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:04:49.650 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:04:49.650 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:04:49.650 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:04:49.650 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:04:49.650 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:49.650 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:04:49.650 Installing 
symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:49.650 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:04:49.650 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:49.650 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:04:49.650 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:49.650 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:04:49.910 16:46:19 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:04:49.910 16:46:19 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:49.910 00:04:49.910 real 0m43.973s 00:04:49.910 user 4m58.809s 00:04:49.910 sys 0m51.513s 00:04:49.910 16:46:19 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:49.910 16:46:19 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:04:49.910 ************************************ 00:04:49.910 END TEST build_native_dpdk 00:04:49.910 ************************************ 00:04:49.910 16:46:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:49.910 16:46:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:49.910 16:46:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:49.910 16:46:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:49.910 16:46:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:49.910 16:46:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:49.910 16:46:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:49.910 16:46:19 -- spdk/autobuild.sh@67 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:04:49.910 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:04:50.170 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:04:50.170 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:04:50.170 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:50.737 Using 'verbs' RDMA provider 00:05:06.569 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:05:21.481 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:05:22.050 Creating mk/config.mk...done. 00:05:22.050 Creating mk/cc.flags.mk...done. 00:05:22.050 Type 'make' to build. 00:05:22.050 16:46:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:05:22.050 16:46:51 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:05:22.050 16:46:51 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:05:22.050 16:46:51 -- common/autotest_common.sh@10 -- $ set +x 00:05:22.050 ************************************ 00:05:22.050 START TEST make 00:05:22.050 ************************************ 00:05:22.050 16:46:51 make -- common/autotest_common.sh@1125 -- $ make -j10 00:05:22.620 make[1]: Nothing to be done for 'all'. 
00:06:01.352 CC lib/ut_mock/mock.o 00:06:01.352 CC lib/log/log.o 00:06:01.352 CC lib/log/log_deprecated.o 00:06:01.352 CC lib/log/log_flags.o 00:06:01.352 CC lib/ut/ut.o 00:06:01.352 LIB libspdk_ut_mock.a 00:06:01.352 LIB libspdk_ut.a 00:06:01.352 LIB libspdk_log.a 00:06:01.352 SO libspdk_ut_mock.so.6.0 00:06:01.352 SO libspdk_ut.so.2.0 00:06:01.352 SO libspdk_log.so.7.0 00:06:01.352 SYMLINK libspdk_ut_mock.so 00:06:01.352 SYMLINK libspdk_ut.so 00:06:01.352 SYMLINK libspdk_log.so 00:06:01.611 CC lib/util/base64.o 00:06:01.611 CC lib/util/bit_array.o 00:06:01.611 CC lib/util/crc16.o 00:06:01.611 CC lib/util/crc32c.o 00:06:01.611 CC lib/util/cpuset.o 00:06:01.611 CC lib/util/crc32.o 00:06:01.611 CXX lib/trace_parser/trace.o 00:06:01.611 CC lib/ioat/ioat.o 00:06:01.611 CC lib/dma/dma.o 00:06:01.869 CC lib/vfio_user/host/vfio_user_pci.o 00:06:01.869 CC lib/vfio_user/host/vfio_user.o 00:06:01.869 CC lib/util/crc32_ieee.o 00:06:01.869 CC lib/util/crc64.o 00:06:01.869 CC lib/util/dif.o 00:06:01.869 LIB libspdk_dma.a 00:06:01.869 CC lib/util/fd.o 00:06:01.869 CC lib/util/fd_group.o 00:06:01.869 SO libspdk_dma.so.5.0 00:06:01.869 CC lib/util/file.o 00:06:01.869 CC lib/util/hexlify.o 00:06:01.869 SYMLINK libspdk_dma.so 00:06:01.869 CC lib/util/iov.o 00:06:01.869 LIB libspdk_ioat.a 00:06:02.127 CC lib/util/math.o 00:06:02.127 CC lib/util/net.o 00:06:02.127 SO libspdk_ioat.so.7.0 00:06:02.127 LIB libspdk_vfio_user.a 00:06:02.127 SO libspdk_vfio_user.so.5.0 00:06:02.127 SYMLINK libspdk_ioat.so 00:06:02.127 CC lib/util/pipe.o 00:06:02.127 CC lib/util/strerror_tls.o 00:06:02.127 CC lib/util/string.o 00:06:02.127 CC lib/util/uuid.o 00:06:02.127 SYMLINK libspdk_vfio_user.so 00:06:02.127 CC lib/util/xor.o 00:06:02.127 CC lib/util/zipf.o 00:06:02.127 CC lib/util/md5.o 00:06:02.384 LIB libspdk_util.a 00:06:02.641 SO libspdk_util.so.10.0 00:06:02.641 SYMLINK libspdk_util.so 00:06:02.641 LIB libspdk_trace_parser.a 00:06:02.641 SO libspdk_trace_parser.so.6.0 00:06:02.900 CC 
lib/conf/conf.o 00:06:02.900 CC lib/idxd/idxd.o 00:06:02.900 CC lib/idxd/idxd_user.o 00:06:02.900 CC lib/idxd/idxd_kernel.o 00:06:02.900 CC lib/json/json_parse.o 00:06:02.900 CC lib/rdma_utils/rdma_utils.o 00:06:02.900 SYMLINK libspdk_trace_parser.so 00:06:02.900 CC lib/env_dpdk/env.o 00:06:02.900 CC lib/vmd/vmd.o 00:06:02.900 CC lib/vmd/led.o 00:06:02.900 CC lib/rdma_provider/common.o 00:06:02.900 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:02.900 CC lib/json/json_util.o 00:06:03.158 CC lib/json/json_write.o 00:06:03.158 LIB libspdk_conf.a 00:06:03.158 SO libspdk_conf.so.6.0 00:06:03.158 CC lib/env_dpdk/memory.o 00:06:03.158 CC lib/env_dpdk/pci.o 00:06:03.158 LIB libspdk_rdma_utils.a 00:06:03.158 SYMLINK libspdk_conf.so 00:06:03.158 SO libspdk_rdma_utils.so.1.0 00:06:03.158 CC lib/env_dpdk/init.o 00:06:03.158 LIB libspdk_rdma_provider.a 00:06:03.158 SO libspdk_rdma_provider.so.6.0 00:06:03.158 SYMLINK libspdk_rdma_utils.so 00:06:03.158 CC lib/env_dpdk/threads.o 00:06:03.158 CC lib/env_dpdk/pci_ioat.o 00:06:03.158 SYMLINK libspdk_rdma_provider.so 00:06:03.158 CC lib/env_dpdk/pci_virtio.o 00:06:03.415 CC lib/env_dpdk/pci_vmd.o 00:06:03.415 LIB libspdk_json.a 00:06:03.415 CC lib/env_dpdk/pci_idxd.o 00:06:03.415 CC lib/env_dpdk/pci_event.o 00:06:03.415 SO libspdk_json.so.6.0 00:06:03.415 CC lib/env_dpdk/sigbus_handler.o 00:06:03.415 SYMLINK libspdk_json.so 00:06:03.415 CC lib/env_dpdk/pci_dpdk.o 00:06:03.415 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:03.415 LIB libspdk_idxd.a 00:06:03.415 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:03.415 SO libspdk_idxd.so.12.1 00:06:03.673 LIB libspdk_vmd.a 00:06:03.673 SYMLINK libspdk_idxd.so 00:06:03.673 SO libspdk_vmd.so.6.0 00:06:03.673 SYMLINK libspdk_vmd.so 00:06:03.673 CC lib/jsonrpc/jsonrpc_server.o 00:06:03.673 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:03.673 CC lib/jsonrpc/jsonrpc_client.o 00:06:03.673 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:03.932 LIB libspdk_jsonrpc.a 00:06:04.190 SO libspdk_jsonrpc.so.6.0 00:06:04.190 
SYMLINK libspdk_jsonrpc.so 00:06:04.450 LIB libspdk_env_dpdk.a 00:06:04.450 CC lib/rpc/rpc.o 00:06:04.709 SO libspdk_env_dpdk.so.15.0 00:06:04.709 SYMLINK libspdk_env_dpdk.so 00:06:04.709 LIB libspdk_rpc.a 00:06:04.709 SO libspdk_rpc.so.6.0 00:06:04.967 SYMLINK libspdk_rpc.so 00:06:05.226 CC lib/trace/trace.o 00:06:05.226 CC lib/trace/trace_flags.o 00:06:05.226 CC lib/trace/trace_rpc.o 00:06:05.226 CC lib/notify/notify_rpc.o 00:06:05.226 CC lib/notify/notify.o 00:06:05.226 CC lib/keyring/keyring.o 00:06:05.226 CC lib/keyring/keyring_rpc.o 00:06:05.488 LIB libspdk_notify.a 00:06:05.488 SO libspdk_notify.so.6.0 00:06:05.488 LIB libspdk_keyring.a 00:06:05.488 SYMLINK libspdk_notify.so 00:06:05.488 LIB libspdk_trace.a 00:06:05.488 SO libspdk_keyring.so.2.0 00:06:05.488 SO libspdk_trace.so.11.0 00:06:05.747 SYMLINK libspdk_keyring.so 00:06:05.747 SYMLINK libspdk_trace.so 00:06:06.006 CC lib/thread/thread.o 00:06:06.006 CC lib/thread/iobuf.o 00:06:06.006 CC lib/sock/sock_rpc.o 00:06:06.006 CC lib/sock/sock.o 00:06:06.573 LIB libspdk_sock.a 00:06:06.573 SO libspdk_sock.so.10.0 00:06:06.573 SYMLINK libspdk_sock.so 00:06:07.141 CC lib/nvme/nvme_ctrlr.o 00:06:07.141 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:07.141 CC lib/nvme/nvme_fabric.o 00:06:07.141 CC lib/nvme/nvme_ns_cmd.o 00:06:07.141 CC lib/nvme/nvme_ns.o 00:06:07.141 CC lib/nvme/nvme_pcie_common.o 00:06:07.141 CC lib/nvme/nvme_pcie.o 00:06:07.141 CC lib/nvme/nvme.o 00:06:07.141 CC lib/nvme/nvme_qpair.o 00:06:07.708 LIB libspdk_thread.a 00:06:07.708 SO libspdk_thread.so.10.1 00:06:07.708 SYMLINK libspdk_thread.so 00:06:07.708 CC lib/nvme/nvme_quirks.o 00:06:07.708 CC lib/nvme/nvme_transport.o 00:06:07.708 CC lib/nvme/nvme_discovery.o 00:06:07.968 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:07.968 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:07.968 CC lib/nvme/nvme_tcp.o 00:06:07.968 CC lib/nvme/nvme_opal.o 00:06:07.968 CC lib/accel/accel.o 00:06:08.226 CC lib/nvme/nvme_io_msg.o 00:06:08.485 CC lib/blob/blobstore.o 00:06:08.485 CC 
lib/blob/request.o 00:06:08.485 CC lib/init/json_config.o 00:06:08.485 CC lib/init/subsystem.o 00:06:08.485 CC lib/virtio/virtio.o 00:06:08.485 CC lib/virtio/virtio_vhost_user.o 00:06:08.744 CC lib/init/subsystem_rpc.o 00:06:08.744 CC lib/fsdev/fsdev.o 00:06:08.744 CC lib/fsdev/fsdev_io.o 00:06:08.744 CC lib/accel/accel_rpc.o 00:06:09.002 CC lib/init/rpc.o 00:06:09.002 CC lib/virtio/virtio_vfio_user.o 00:06:09.002 CC lib/fsdev/fsdev_rpc.o 00:06:09.002 CC lib/nvme/nvme_poll_group.o 00:06:09.002 LIB libspdk_init.a 00:06:09.002 SO libspdk_init.so.6.0 00:06:09.002 CC lib/virtio/virtio_pci.o 00:06:09.261 CC lib/accel/accel_sw.o 00:06:09.261 SYMLINK libspdk_init.so 00:06:09.261 CC lib/blob/zeroes.o 00:06:09.261 CC lib/blob/blob_bs_dev.o 00:06:09.261 CC lib/event/app.o 00:06:09.261 CC lib/nvme/nvme_zns.o 00:06:09.261 CC lib/nvme/nvme_stubs.o 00:06:09.261 LIB libspdk_virtio.a 00:06:09.261 LIB libspdk_fsdev.a 00:06:09.519 SO libspdk_fsdev.so.1.0 00:06:09.519 SO libspdk_virtio.so.7.0 00:06:09.519 CC lib/event/reactor.o 00:06:09.519 LIB libspdk_accel.a 00:06:09.519 SYMLINK libspdk_fsdev.so 00:06:09.519 CC lib/event/log_rpc.o 00:06:09.519 SYMLINK libspdk_virtio.so 00:06:09.519 CC lib/event/app_rpc.o 00:06:09.519 SO libspdk_accel.so.16.0 00:06:09.519 SYMLINK libspdk_accel.so 00:06:09.519 CC lib/event/scheduler_static.o 00:06:09.519 CC lib/nvme/nvme_auth.o 00:06:09.778 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:09.778 CC lib/nvme/nvme_cuse.o 00:06:09.778 CC lib/nvme/nvme_rdma.o 00:06:09.778 CC lib/bdev/bdev.o 00:06:09.778 CC lib/bdev/bdev_rpc.o 00:06:09.778 CC lib/bdev/bdev_zone.o 00:06:09.778 CC lib/bdev/part.o 00:06:09.778 LIB libspdk_event.a 00:06:10.037 SO libspdk_event.so.14.0 00:06:10.037 CC lib/bdev/scsi_nvme.o 00:06:10.037 SYMLINK libspdk_event.so 00:06:10.296 LIB libspdk_fuse_dispatcher.a 00:06:10.296 SO libspdk_fuse_dispatcher.so.1.0 00:06:10.556 SYMLINK libspdk_fuse_dispatcher.so 00:06:11.157 LIB libspdk_nvme.a 00:06:11.416 SO libspdk_nvme.so.14.0 00:06:11.675 
SYMLINK libspdk_nvme.so 00:06:11.675 LIB libspdk_blob.a 00:06:11.935 SO libspdk_blob.so.11.0 00:06:11.935 SYMLINK libspdk_blob.so 00:06:12.503 CC lib/lvol/lvol.o 00:06:12.503 CC lib/blobfs/blobfs.o 00:06:12.503 CC lib/blobfs/tree.o 00:06:12.503 LIB libspdk_bdev.a 00:06:12.503 SO libspdk_bdev.so.16.0 00:06:12.503 SYMLINK libspdk_bdev.so 00:06:12.763 CC lib/scsi/dev.o 00:06:12.763 CC lib/scsi/lun.o 00:06:12.763 CC lib/scsi/scsi.o 00:06:12.763 CC lib/scsi/port.o 00:06:12.763 CC lib/nvmf/ctrlr.o 00:06:12.763 CC lib/ublk/ublk.o 00:06:12.763 CC lib/ftl/ftl_core.o 00:06:12.763 CC lib/nbd/nbd.o 00:06:13.022 CC lib/nbd/nbd_rpc.o 00:06:13.022 CC lib/nvmf/ctrlr_discovery.o 00:06:13.022 CC lib/nvmf/ctrlr_bdev.o 00:06:13.281 CC lib/ftl/ftl_init.o 00:06:13.281 LIB libspdk_blobfs.a 00:06:13.281 CC lib/scsi/scsi_bdev.o 00:06:13.281 SO libspdk_blobfs.so.10.0 00:06:13.281 LIB libspdk_lvol.a 00:06:13.281 SYMLINK libspdk_blobfs.so 00:06:13.281 CC lib/scsi/scsi_pr.o 00:06:13.281 LIB libspdk_nbd.a 00:06:13.281 SO libspdk_lvol.so.10.0 00:06:13.281 CC lib/ftl/ftl_layout.o 00:06:13.281 SO libspdk_nbd.so.7.0 00:06:13.281 SYMLINK libspdk_lvol.so 00:06:13.281 CC lib/ftl/ftl_debug.o 00:06:13.281 CC lib/scsi/scsi_rpc.o 00:06:13.281 SYMLINK libspdk_nbd.so 00:06:13.281 CC lib/scsi/task.o 00:06:13.540 CC lib/nvmf/subsystem.o 00:06:13.540 CC lib/ftl/ftl_io.o 00:06:13.540 CC lib/nvmf/nvmf.o 00:06:13.540 CC lib/ublk/ublk_rpc.o 00:06:13.540 CC lib/ftl/ftl_sb.o 00:06:13.540 CC lib/ftl/ftl_l2p.o 00:06:13.540 CC lib/ftl/ftl_l2p_flat.o 00:06:13.807 LIB libspdk_scsi.a 00:06:13.807 LIB libspdk_ublk.a 00:06:13.807 SO libspdk_ublk.so.3.0 00:06:13.807 CC lib/ftl/ftl_nv_cache.o 00:06:13.807 CC lib/nvmf/nvmf_rpc.o 00:06:13.807 CC lib/nvmf/transport.o 00:06:13.807 CC lib/nvmf/tcp.o 00:06:13.807 SO libspdk_scsi.so.9.0 00:06:13.807 CC lib/ftl/ftl_band.o 00:06:13.807 SYMLINK libspdk_ublk.so 00:06:13.807 CC lib/ftl/ftl_band_ops.o 00:06:13.807 SYMLINK libspdk_scsi.so 00:06:13.807 CC lib/ftl/ftl_writer.o 00:06:14.065 
CC lib/ftl/ftl_rq.o 00:06:14.065 CC lib/ftl/ftl_reloc.o 00:06:14.327 CC lib/ftl/ftl_l2p_cache.o 00:06:14.327 CC lib/iscsi/conn.o 00:06:14.587 CC lib/vhost/vhost.o 00:06:14.587 CC lib/nvmf/stubs.o 00:06:14.587 CC lib/vhost/vhost_rpc.o 00:06:14.587 CC lib/ftl/ftl_p2l.o 00:06:14.846 CC lib/nvmf/mdns_server.o 00:06:14.846 CC lib/nvmf/rdma.o 00:06:14.846 CC lib/nvmf/auth.o 00:06:14.846 CC lib/iscsi/init_grp.o 00:06:14.846 CC lib/iscsi/iscsi.o 00:06:15.105 CC lib/iscsi/param.o 00:06:15.105 CC lib/ftl/ftl_p2l_log.o 00:06:15.105 CC lib/vhost/vhost_scsi.o 00:06:15.105 CC lib/vhost/vhost_blk.o 00:06:15.105 CC lib/iscsi/portal_grp.o 00:06:15.105 CC lib/vhost/rte_vhost_user.o 00:06:15.364 CC lib/iscsi/tgt_node.o 00:06:15.364 CC lib/ftl/mngt/ftl_mngt.o 00:06:15.364 CC lib/iscsi/iscsi_subsystem.o 00:06:15.623 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:15.623 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:15.623 CC lib/iscsi/iscsi_rpc.o 00:06:15.623 CC lib/iscsi/task.o 00:06:15.623 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:15.881 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:15.881 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:15.881 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:15.881 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:15.881 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:16.140 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:16.140 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:16.140 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:16.140 LIB libspdk_vhost.a 00:06:16.140 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:16.140 CC lib/ftl/utils/ftl_conf.o 00:06:16.140 CC lib/ftl/utils/ftl_md.o 00:06:16.140 SO libspdk_vhost.so.8.0 00:06:16.140 SYMLINK libspdk_vhost.so 00:06:16.140 CC lib/ftl/utils/ftl_mempool.o 00:06:16.400 CC lib/ftl/utils/ftl_bitmap.o 00:06:16.400 CC lib/ftl/utils/ftl_property.o 00:06:16.400 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:16.400 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:16.400 LIB libspdk_iscsi.a 00:06:16.400 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:16.400 SO libspdk_iscsi.so.8.0 00:06:16.400 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:16.400 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:16.400 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:16.659 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:16.659 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:16.659 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:16.659 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:16.659 SYMLINK libspdk_iscsi.so 00:06:16.659 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:16.659 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:16.659 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:16.659 CC lib/ftl/base/ftl_base_dev.o 00:06:16.659 CC lib/ftl/base/ftl_base_bdev.o 00:06:16.659 CC lib/ftl/ftl_trace.o 00:06:16.917 LIB libspdk_ftl.a 00:06:16.917 LIB libspdk_nvmf.a 00:06:17.177 SO libspdk_nvmf.so.19.0 00:06:17.177 SO libspdk_ftl.so.9.0 00:06:17.436 SYMLINK libspdk_nvmf.so 00:06:17.436 SYMLINK libspdk_ftl.so 00:06:18.002 CC module/env_dpdk/env_dpdk_rpc.o 00:06:18.002 CC module/accel/dsa/accel_dsa.o 00:06:18.002 CC module/accel/ioat/accel_ioat.o 00:06:18.002 CC module/blob/bdev/blob_bdev.o 00:06:18.002 CC module/sock/posix/posix.o 00:06:18.002 CC module/accel/error/accel_error.o 00:06:18.002 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:18.002 CC module/fsdev/aio/fsdev_aio.o 00:06:18.002 CC module/keyring/file/keyring.o 00:06:18.003 CC module/accel/iaa/accel_iaa.o 00:06:18.003 LIB libspdk_env_dpdk_rpc.a 00:06:18.003 SO libspdk_env_dpdk_rpc.so.6.0 00:06:18.003 SYMLINK libspdk_env_dpdk_rpc.so 00:06:18.003 CC module/accel/iaa/accel_iaa_rpc.o 00:06:18.003 CC module/keyring/file/keyring_rpc.o 00:06:18.003 CC module/accel/ioat/accel_ioat_rpc.o 00:06:18.003 CC module/accel/error/accel_error_rpc.o 00:06:18.003 LIB libspdk_scheduler_dynamic.a 00:06:18.003 CC module/accel/dsa/accel_dsa_rpc.o 00:06:18.003 SO libspdk_scheduler_dynamic.so.4.0 00:06:18.261 LIB libspdk_keyring_file.a 00:06:18.261 LIB libspdk_accel_iaa.a 00:06:18.261 LIB libspdk_accel_ioat.a 00:06:18.261 LIB libspdk_blob_bdev.a 00:06:18.261 SYMLINK libspdk_scheduler_dynamic.so 00:06:18.261 
SO libspdk_keyring_file.so.2.0 00:06:18.261 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:18.261 LIB libspdk_accel_error.a 00:06:18.261 SO libspdk_accel_iaa.so.3.0 00:06:18.261 SO libspdk_accel_ioat.so.6.0 00:06:18.261 SO libspdk_blob_bdev.so.11.0 00:06:18.261 SO libspdk_accel_error.so.2.0 00:06:18.261 SYMLINK libspdk_keyring_file.so 00:06:18.261 SYMLINK libspdk_accel_iaa.so 00:06:18.261 LIB libspdk_accel_dsa.a 00:06:18.261 SYMLINK libspdk_accel_ioat.so 00:06:18.261 CC module/fsdev/aio/linux_aio_mgr.o 00:06:18.261 SYMLINK libspdk_blob_bdev.so 00:06:18.262 SYMLINK libspdk_accel_error.so 00:06:18.262 SO libspdk_accel_dsa.so.5.0 00:06:18.262 SYMLINK libspdk_accel_dsa.so 00:06:18.262 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:18.521 CC module/keyring/linux/keyring.o 00:06:18.521 CC module/scheduler/gscheduler/gscheduler.o 00:06:18.521 CC module/keyring/linux/keyring_rpc.o 00:06:18.521 CC module/bdev/delay/vbdev_delay.o 00:06:18.521 CC module/bdev/gpt/gpt.o 00:06:18.521 CC module/bdev/error/vbdev_error.o 00:06:18.521 CC module/blobfs/bdev/blobfs_bdev.o 00:06:18.521 LIB libspdk_scheduler_dpdk_governor.a 00:06:18.521 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:18.521 LIB libspdk_fsdev_aio.a 00:06:18.521 CC module/bdev/error/vbdev_error_rpc.o 00:06:18.521 SO libspdk_fsdev_aio.so.1.0 00:06:18.521 LIB libspdk_scheduler_gscheduler.a 00:06:18.521 LIB libspdk_keyring_linux.a 00:06:18.521 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:18.521 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:18.521 SO libspdk_scheduler_gscheduler.so.4.0 00:06:18.521 SO libspdk_keyring_linux.so.1.0 00:06:18.521 SYMLINK libspdk_fsdev_aio.so 00:06:18.779 SYMLINK libspdk_scheduler_gscheduler.so 00:06:18.779 CC module/bdev/gpt/vbdev_gpt.o 00:06:18.779 SYMLINK libspdk_keyring_linux.so 00:06:18.779 LIB libspdk_sock_posix.a 00:06:18.779 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:18.779 SO libspdk_sock_posix.so.6.0 00:06:18.779 LIB libspdk_bdev_error.a 00:06:18.779 SYMLINK 
libspdk_sock_posix.so 00:06:18.779 SO libspdk_bdev_error.so.6.0 00:06:18.779 CC module/bdev/lvol/vbdev_lvol.o 00:06:18.779 CC module/bdev/malloc/bdev_malloc.o 00:06:18.779 CC module/bdev/null/bdev_null.o 00:06:18.779 CC module/bdev/nvme/bdev_nvme.o 00:06:18.779 SYMLINK libspdk_bdev_error.so 00:06:18.779 LIB libspdk_bdev_delay.a 00:06:18.779 LIB libspdk_blobfs_bdev.a 00:06:18.779 SO libspdk_bdev_delay.so.6.0 00:06:18.779 SO libspdk_blobfs_bdev.so.6.0 00:06:19.036 CC module/bdev/passthru/vbdev_passthru.o 00:06:19.037 CC module/bdev/raid/bdev_raid.o 00:06:19.037 LIB libspdk_bdev_gpt.a 00:06:19.037 SYMLINK libspdk_blobfs_bdev.so 00:06:19.037 SYMLINK libspdk_bdev_delay.so 00:06:19.037 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:19.037 SO libspdk_bdev_gpt.so.6.0 00:06:19.037 CC module/bdev/split/vbdev_split.o 00:06:19.037 SYMLINK libspdk_bdev_gpt.so 00:06:19.037 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:19.037 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:19.037 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:19.037 CC module/bdev/null/bdev_null_rpc.o 00:06:19.295 LIB libspdk_bdev_malloc.a 00:06:19.295 SO libspdk_bdev_malloc.so.6.0 00:06:19.295 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:19.295 CC module/bdev/split/vbdev_split_rpc.o 00:06:19.295 SYMLINK libspdk_bdev_malloc.so 00:06:19.295 CC module/bdev/nvme/nvme_rpc.o 00:06:19.295 LIB libspdk_bdev_null.a 00:06:19.295 SO libspdk_bdev_null.so.6.0 00:06:19.295 LIB libspdk_bdev_split.a 00:06:19.295 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:19.295 LIB libspdk_bdev_passthru.a 00:06:19.295 SO libspdk_bdev_split.so.6.0 00:06:19.295 SYMLINK libspdk_bdev_null.so 00:06:19.555 LIB libspdk_bdev_zone_block.a 00:06:19.555 SO libspdk_bdev_passthru.so.6.0 00:06:19.555 SO libspdk_bdev_zone_block.so.6.0 00:06:19.555 SYMLINK libspdk_bdev_split.so 00:06:19.555 CC module/bdev/aio/bdev_aio.o 00:06:19.555 CC module/bdev/aio/bdev_aio_rpc.o 00:06:19.555 SYMLINK libspdk_bdev_passthru.so 00:06:19.555 SYMLINK 
libspdk_bdev_zone_block.so 00:06:19.555 CC module/bdev/nvme/bdev_mdns_client.o 00:06:19.555 CC module/bdev/ftl/bdev_ftl.o 00:06:19.555 CC module/bdev/raid/bdev_raid_rpc.o 00:06:19.555 CC module/bdev/iscsi/bdev_iscsi.o 00:06:19.555 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:19.555 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:19.555 CC module/bdev/nvme/vbdev_opal.o 00:06:19.815 LIB libspdk_bdev_lvol.a 00:06:19.815 SO libspdk_bdev_lvol.so.6.0 00:06:19.815 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:19.815 LIB libspdk_bdev_aio.a 00:06:19.815 SYMLINK libspdk_bdev_lvol.so 00:06:19.815 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:19.815 CC module/bdev/raid/bdev_raid_sb.o 00:06:19.815 SO libspdk_bdev_aio.so.6.0 00:06:19.815 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:19.815 CC module/bdev/raid/raid0.o 00:06:19.815 SYMLINK libspdk_bdev_aio.so 00:06:19.815 CC module/bdev/raid/raid1.o 00:06:19.815 CC module/bdev/raid/concat.o 00:06:20.075 CC module/bdev/raid/raid5f.o 00:06:20.075 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:20.075 LIB libspdk_bdev_iscsi.a 00:06:20.075 LIB libspdk_bdev_ftl.a 00:06:20.075 SO libspdk_bdev_iscsi.so.6.0 00:06:20.075 SO libspdk_bdev_ftl.so.6.0 00:06:20.075 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:20.075 SYMLINK libspdk_bdev_iscsi.so 00:06:20.075 SYMLINK libspdk_bdev_ftl.so 00:06:20.337 LIB libspdk_bdev_virtio.a 00:06:20.337 SO libspdk_bdev_virtio.so.6.0 00:06:20.337 SYMLINK libspdk_bdev_virtio.so 00:06:20.337 LIB libspdk_bdev_raid.a 00:06:20.596 SO libspdk_bdev_raid.so.6.0 00:06:20.596 SYMLINK libspdk_bdev_raid.so 00:06:21.534 LIB libspdk_bdev_nvme.a 00:06:21.534 SO libspdk_bdev_nvme.so.7.0 00:06:21.534 SYMLINK libspdk_bdev_nvme.so 00:06:22.319 CC module/event/subsystems/keyring/keyring.o 00:06:22.319 CC module/event/subsystems/scheduler/scheduler.o 00:06:22.319 CC module/event/subsystems/fsdev/fsdev.o 00:06:22.319 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:22.319 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:22.319 CC 
module/event/subsystems/vmd/vmd.o 00:06:22.319 CC module/event/subsystems/iobuf/iobuf.o 00:06:22.319 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:22.319 CC module/event/subsystems/sock/sock.o 00:06:22.607 LIB libspdk_event_keyring.a 00:06:22.608 LIB libspdk_event_fsdev.a 00:06:22.608 LIB libspdk_event_vhost_blk.a 00:06:22.608 LIB libspdk_event_sock.a 00:06:22.608 LIB libspdk_event_vmd.a 00:06:22.608 LIB libspdk_event_scheduler.a 00:06:22.608 SO libspdk_event_keyring.so.1.0 00:06:22.608 SO libspdk_event_fsdev.so.1.0 00:06:22.608 SO libspdk_event_vhost_blk.so.3.0 00:06:22.608 SO libspdk_event_sock.so.5.0 00:06:22.608 LIB libspdk_event_iobuf.a 00:06:22.608 SO libspdk_event_vmd.so.6.0 00:06:22.608 SO libspdk_event_scheduler.so.4.0 00:06:22.608 SO libspdk_event_iobuf.so.3.0 00:06:22.608 SYMLINK libspdk_event_keyring.so 00:06:22.608 SYMLINK libspdk_event_fsdev.so 00:06:22.608 SYMLINK libspdk_event_sock.so 00:06:22.608 SYMLINK libspdk_event_vhost_blk.so 00:06:22.608 SYMLINK libspdk_event_vmd.so 00:06:22.608 SYMLINK libspdk_event_scheduler.so 00:06:22.608 SYMLINK libspdk_event_iobuf.so 00:06:22.867 CC module/event/subsystems/accel/accel.o 00:06:22.867 LIB libspdk_event_accel.a 00:06:22.867 SO libspdk_event_accel.so.6.0 00:06:23.126 SYMLINK libspdk_event_accel.so 00:06:23.385 CC module/event/subsystems/bdev/bdev.o 00:06:23.648 LIB libspdk_event_bdev.a 00:06:23.648 SO libspdk_event_bdev.so.6.0 00:06:23.648 SYMLINK libspdk_event_bdev.so 00:06:23.908 CC module/event/subsystems/nbd/nbd.o 00:06:23.908 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:23.908 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:23.908 CC module/event/subsystems/ublk/ublk.o 00:06:23.908 CC module/event/subsystems/scsi/scsi.o 00:06:24.168 LIB libspdk_event_ublk.a 00:06:24.168 LIB libspdk_event_nbd.a 00:06:24.168 LIB libspdk_event_scsi.a 00:06:24.168 SO libspdk_event_ublk.so.3.0 00:06:24.168 SO libspdk_event_scsi.so.6.0 00:06:24.168 SO libspdk_event_nbd.so.6.0 00:06:24.168 SYMLINK 
libspdk_event_ublk.so 00:06:24.168 SYMLINK libspdk_event_nbd.so 00:06:24.168 LIB libspdk_event_nvmf.a 00:06:24.168 SYMLINK libspdk_event_scsi.so 00:06:24.168 SO libspdk_event_nvmf.so.6.0 00:06:24.427 SYMLINK libspdk_event_nvmf.so 00:06:24.686 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:24.686 CC module/event/subsystems/iscsi/iscsi.o 00:06:24.686 LIB libspdk_event_vhost_scsi.a 00:06:24.686 LIB libspdk_event_iscsi.a 00:06:24.945 SO libspdk_event_vhost_scsi.so.3.0 00:06:24.945 SO libspdk_event_iscsi.so.6.0 00:06:24.945 SYMLINK libspdk_event_iscsi.so 00:06:24.945 SYMLINK libspdk_event_vhost_scsi.so 00:06:25.205 SO libspdk.so.6.0 00:06:25.205 SYMLINK libspdk.so 00:06:25.464 CC app/trace_record/trace_record.o 00:06:25.464 TEST_HEADER include/spdk/accel.h 00:06:25.464 CXX app/trace/trace.o 00:06:25.464 TEST_HEADER include/spdk/accel_module.h 00:06:25.464 TEST_HEADER include/spdk/assert.h 00:06:25.464 TEST_HEADER include/spdk/barrier.h 00:06:25.464 TEST_HEADER include/spdk/base64.h 00:06:25.464 TEST_HEADER include/spdk/bdev.h 00:06:25.464 TEST_HEADER include/spdk/bdev_module.h 00:06:25.464 TEST_HEADER include/spdk/bdev_zone.h 00:06:25.464 TEST_HEADER include/spdk/bit_array.h 00:06:25.464 TEST_HEADER include/spdk/bit_pool.h 00:06:25.464 TEST_HEADER include/spdk/blob_bdev.h 00:06:25.464 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:25.464 TEST_HEADER include/spdk/blobfs.h 00:06:25.464 TEST_HEADER include/spdk/blob.h 00:06:25.464 TEST_HEADER include/spdk/conf.h 00:06:25.464 TEST_HEADER include/spdk/config.h 00:06:25.464 TEST_HEADER include/spdk/cpuset.h 00:06:25.464 TEST_HEADER include/spdk/crc16.h 00:06:25.464 TEST_HEADER include/spdk/crc32.h 00:06:25.464 TEST_HEADER include/spdk/crc64.h 00:06:25.464 CC app/nvmf_tgt/nvmf_main.o 00:06:25.464 TEST_HEADER include/spdk/dif.h 00:06:25.464 TEST_HEADER include/spdk/dma.h 00:06:25.464 TEST_HEADER include/spdk/endian.h 00:06:25.464 TEST_HEADER include/spdk/env_dpdk.h 00:06:25.464 TEST_HEADER include/spdk/env.h 
00:06:25.464 TEST_HEADER include/spdk/event.h 00:06:25.464 TEST_HEADER include/spdk/fd_group.h 00:06:25.464 TEST_HEADER include/spdk/fd.h 00:06:25.464 TEST_HEADER include/spdk/file.h 00:06:25.465 TEST_HEADER include/spdk/fsdev.h 00:06:25.465 TEST_HEADER include/spdk/fsdev_module.h 00:06:25.465 TEST_HEADER include/spdk/ftl.h 00:06:25.465 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:25.465 TEST_HEADER include/spdk/gpt_spec.h 00:06:25.465 TEST_HEADER include/spdk/hexlify.h 00:06:25.465 TEST_HEADER include/spdk/histogram_data.h 00:06:25.465 TEST_HEADER include/spdk/idxd.h 00:06:25.465 TEST_HEADER include/spdk/idxd_spec.h 00:06:25.465 TEST_HEADER include/spdk/init.h 00:06:25.465 CC examples/util/zipf/zipf.o 00:06:25.465 TEST_HEADER include/spdk/ioat.h 00:06:25.465 CC test/thread/poller_perf/poller_perf.o 00:06:25.465 TEST_HEADER include/spdk/ioat_spec.h 00:06:25.465 CC examples/ioat/perf/perf.o 00:06:25.465 TEST_HEADER include/spdk/iscsi_spec.h 00:06:25.465 TEST_HEADER include/spdk/json.h 00:06:25.465 TEST_HEADER include/spdk/jsonrpc.h 00:06:25.465 TEST_HEADER include/spdk/keyring.h 00:06:25.465 TEST_HEADER include/spdk/keyring_module.h 00:06:25.465 TEST_HEADER include/spdk/likely.h 00:06:25.465 TEST_HEADER include/spdk/log.h 00:06:25.465 TEST_HEADER include/spdk/lvol.h 00:06:25.465 TEST_HEADER include/spdk/md5.h 00:06:25.465 TEST_HEADER include/spdk/memory.h 00:06:25.465 TEST_HEADER include/spdk/mmio.h 00:06:25.465 TEST_HEADER include/spdk/nbd.h 00:06:25.465 TEST_HEADER include/spdk/net.h 00:06:25.465 TEST_HEADER include/spdk/notify.h 00:06:25.465 TEST_HEADER include/spdk/nvme.h 00:06:25.465 CC test/app/bdev_svc/bdev_svc.o 00:06:25.465 TEST_HEADER include/spdk/nvme_intel.h 00:06:25.465 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:25.465 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:25.465 CC test/dma/test_dma/test_dma.o 00:06:25.465 TEST_HEADER include/spdk/nvme_spec.h 00:06:25.465 TEST_HEADER include/spdk/nvme_zns.h 00:06:25.465 TEST_HEADER 
include/spdk/nvmf_cmd.h 00:06:25.465 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:25.465 TEST_HEADER include/spdk/nvmf.h 00:06:25.465 TEST_HEADER include/spdk/nvmf_spec.h 00:06:25.465 TEST_HEADER include/spdk/nvmf_transport.h 00:06:25.465 TEST_HEADER include/spdk/opal.h 00:06:25.465 TEST_HEADER include/spdk/opal_spec.h 00:06:25.465 TEST_HEADER include/spdk/pci_ids.h 00:06:25.465 TEST_HEADER include/spdk/pipe.h 00:06:25.465 TEST_HEADER include/spdk/queue.h 00:06:25.465 CC test/env/mem_callbacks/mem_callbacks.o 00:06:25.465 TEST_HEADER include/spdk/reduce.h 00:06:25.465 TEST_HEADER include/spdk/rpc.h 00:06:25.465 TEST_HEADER include/spdk/scheduler.h 00:06:25.465 TEST_HEADER include/spdk/scsi.h 00:06:25.465 TEST_HEADER include/spdk/scsi_spec.h 00:06:25.465 TEST_HEADER include/spdk/sock.h 00:06:25.465 TEST_HEADER include/spdk/stdinc.h 00:06:25.465 TEST_HEADER include/spdk/string.h 00:06:25.465 TEST_HEADER include/spdk/thread.h 00:06:25.465 TEST_HEADER include/spdk/trace.h 00:06:25.465 TEST_HEADER include/spdk/trace_parser.h 00:06:25.465 TEST_HEADER include/spdk/tree.h 00:06:25.465 TEST_HEADER include/spdk/ublk.h 00:06:25.465 TEST_HEADER include/spdk/util.h 00:06:25.465 TEST_HEADER include/spdk/uuid.h 00:06:25.465 TEST_HEADER include/spdk/version.h 00:06:25.465 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:25.465 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:25.465 TEST_HEADER include/spdk/vhost.h 00:06:25.465 TEST_HEADER include/spdk/vmd.h 00:06:25.465 TEST_HEADER include/spdk/xor.h 00:06:25.724 TEST_HEADER include/spdk/zipf.h 00:06:25.724 CXX test/cpp_headers/accel.o 00:06:25.724 LINK nvmf_tgt 00:06:25.724 LINK zipf 00:06:25.724 LINK poller_perf 00:06:25.724 LINK spdk_trace_record 00:06:25.724 LINK bdev_svc 00:06:25.724 LINK ioat_perf 00:06:25.724 CXX test/cpp_headers/accel_module.o 00:06:25.724 LINK spdk_trace 00:06:25.724 CXX test/cpp_headers/assert.o 00:06:25.983 CC app/iscsi_tgt/iscsi_tgt.o 00:06:25.983 CXX test/cpp_headers/barrier.o 00:06:25.983 CC 
app/spdk_lspci/spdk_lspci.o 00:06:25.983 CC app/spdk_tgt/spdk_tgt.o 00:06:25.983 CC examples/ioat/verify/verify.o 00:06:25.983 CC app/spdk_nvme_perf/perf.o 00:06:25.983 CC app/spdk_nvme_identify/identify.o 00:06:25.983 LINK test_dma 00:06:25.983 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:25.983 LINK spdk_lspci 00:06:25.983 CXX test/cpp_headers/base64.o 00:06:25.983 LINK mem_callbacks 00:06:26.242 LINK iscsi_tgt 00:06:26.242 LINK spdk_tgt 00:06:26.242 LINK verify 00:06:26.242 CXX test/cpp_headers/bdev.o 00:06:26.243 CXX test/cpp_headers/bdev_module.o 00:06:26.243 CXX test/cpp_headers/bdev_zone.o 00:06:26.243 CC test/env/vtophys/vtophys.o 00:06:26.243 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:26.502 CC test/env/memory/memory_ut.o 00:06:26.502 LINK vtophys 00:06:26.502 CXX test/cpp_headers/bit_array.o 00:06:26.502 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:26.502 LINK nvme_fuzz 00:06:26.502 LINK env_dpdk_post_init 00:06:26.502 CC app/spdk_nvme_discover/discovery_aer.o 00:06:26.502 CC examples/thread/thread/thread_ex.o 00:06:26.502 CXX test/cpp_headers/bit_pool.o 00:06:26.502 LINK interrupt_tgt 00:06:26.761 CXX test/cpp_headers/blob_bdev.o 00:06:26.761 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:26.761 LINK spdk_nvme_discover 00:06:26.761 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:26.761 CC test/env/pci/pci_ut.o 00:06:26.761 CXX test/cpp_headers/blobfs_bdev.o 00:06:26.761 LINK thread 00:06:26.761 CXX test/cpp_headers/blobfs.o 00:06:26.761 LINK spdk_nvme_perf 00:06:26.761 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:27.020 CXX test/cpp_headers/blob.o 00:06:27.020 CXX test/cpp_headers/conf.o 00:06:27.020 CC app/spdk_top/spdk_top.o 00:06:27.020 LINK spdk_nvme_identify 00:06:27.020 CC test/app/histogram_perf/histogram_perf.o 00:06:27.020 CXX test/cpp_headers/config.o 00:06:27.020 CXX test/cpp_headers/cpuset.o 00:06:27.280 LINK pci_ut 00:06:27.280 CC examples/sock/hello_world/hello_sock.o 00:06:27.280 CC test/app/jsoncat/jsoncat.o 
00:06:27.280 LINK histogram_perf 00:06:27.280 LINK vhost_fuzz 00:06:27.280 CC test/app/stub/stub.o 00:06:27.280 CXX test/cpp_headers/crc16.o 00:06:27.280 LINK jsoncat 00:06:27.539 CXX test/cpp_headers/crc32.o 00:06:27.539 LINK stub 00:06:27.539 LINK hello_sock 00:06:27.539 CC app/spdk_dd/spdk_dd.o 00:06:27.539 CC app/vhost/vhost.o 00:06:27.539 CC test/rpc_client/rpc_client_test.o 00:06:27.539 CC app/fio/nvme/fio_plugin.o 00:06:27.539 LINK memory_ut 00:06:27.539 CXX test/cpp_headers/crc64.o 00:06:27.539 CXX test/cpp_headers/dif.o 00:06:27.799 LINK rpc_client_test 00:06:27.799 LINK vhost 00:06:27.799 CXX test/cpp_headers/dma.o 00:06:27.799 CC examples/vmd/lsvmd/lsvmd.o 00:06:27.799 CC examples/vmd/led/led.o 00:06:27.799 LINK spdk_dd 00:06:27.799 LINK lsvmd 00:06:27.799 CXX test/cpp_headers/endian.o 00:06:28.058 LINK led 00:06:28.058 CC app/fio/bdev/fio_plugin.o 00:06:28.058 CC test/accel/dif/dif.o 00:06:28.058 CC test/blobfs/mkfs/mkfs.o 00:06:28.058 LINK spdk_top 00:06:28.058 CXX test/cpp_headers/env_dpdk.o 00:06:28.058 LINK spdk_nvme 00:06:28.058 CC examples/idxd/perf/perf.o 00:06:28.318 LINK mkfs 00:06:28.318 CXX test/cpp_headers/env.o 00:06:28.318 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:28.318 CC examples/accel/perf/accel_perf.o 00:06:28.318 CXX test/cpp_headers/event.o 00:06:28.318 CC examples/nvme/hello_world/hello_world.o 00:06:28.318 CC examples/blob/hello_world/hello_blob.o 00:06:28.318 LINK iscsi_fuzz 00:06:28.577 CC examples/nvme/reconnect/reconnect.o 00:06:28.578 LINK spdk_bdev 00:06:28.578 LINK idxd_perf 00:06:28.578 CXX test/cpp_headers/fd_group.o 00:06:28.578 LINK hello_fsdev 00:06:28.578 LINK hello_world 00:06:28.578 LINK hello_blob 00:06:28.578 CXX test/cpp_headers/fd.o 00:06:28.578 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:28.578 LINK dif 00:06:28.838 CC examples/nvme/arbitration/arbitration.o 00:06:28.838 CC examples/nvme/hotplug/hotplug.o 00:06:28.838 LINK reconnect 00:06:28.838 CXX test/cpp_headers/file.o 00:06:28.838 LINK 
accel_perf 00:06:28.838 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:28.838 CC examples/nvme/abort/abort.o 00:06:28.838 CC examples/blob/cli/blobcli.o 00:06:29.097 CXX test/cpp_headers/fsdev.o 00:06:29.097 LINK hotplug 00:06:29.097 CXX test/cpp_headers/fsdev_module.o 00:06:29.097 LINK arbitration 00:06:29.097 CC test/event/event_perf/event_perf.o 00:06:29.097 LINK cmb_copy 00:06:29.097 LINK event_perf 00:06:29.097 CXX test/cpp_headers/ftl.o 00:06:29.097 CC test/lvol/esnap/esnap.o 00:06:29.097 CC test/event/reactor/reactor.o 00:06:29.097 LINK nvme_manage 00:06:29.357 CC test/event/reactor_perf/reactor_perf.o 00:06:29.357 LINK abort 00:06:29.357 CC test/event/app_repeat/app_repeat.o 00:06:29.357 LINK reactor 00:06:29.357 CC test/event/scheduler/scheduler.o 00:06:29.357 CXX test/cpp_headers/fuse_dispatcher.o 00:06:29.357 LINK reactor_perf 00:06:29.357 LINK blobcli 00:06:29.357 LINK app_repeat 00:06:29.357 CXX test/cpp_headers/gpt_spec.o 00:06:29.357 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:29.616 CC examples/bdev/hello_world/hello_bdev.o 00:06:29.616 LINK scheduler 00:06:29.616 CXX test/cpp_headers/hexlify.o 00:06:29.616 CXX test/cpp_headers/histogram_data.o 00:06:29.616 CXX test/cpp_headers/idxd.o 00:06:29.616 CC test/nvme/aer/aer.o 00:06:29.616 LINK pmr_persistence 00:06:29.616 CC examples/bdev/bdevperf/bdevperf.o 00:06:29.617 CXX test/cpp_headers/idxd_spec.o 00:06:29.617 CC test/bdev/bdevio/bdevio.o 00:06:29.617 LINK hello_bdev 00:06:29.876 CXX test/cpp_headers/init.o 00:06:29.876 CXX test/cpp_headers/ioat.o 00:06:29.876 CC test/nvme/reset/reset.o 00:06:29.876 CC test/nvme/sgl/sgl.o 00:06:29.876 CXX test/cpp_headers/ioat_spec.o 00:06:29.876 LINK aer 00:06:29.876 CXX test/cpp_headers/iscsi_spec.o 00:06:29.876 CC test/nvme/overhead/overhead.o 00:06:29.876 CC test/nvme/e2edp/nvme_dp.o 00:06:30.136 CXX test/cpp_headers/json.o 00:06:30.136 LINK reset 00:06:30.136 LINK sgl 00:06:30.136 CC test/nvme/err_injection/err_injection.o 00:06:30.136 LINK bdevio 
00:06:30.136 CC test/nvme/startup/startup.o 00:06:30.136 CXX test/cpp_headers/jsonrpc.o 00:06:30.136 LINK overhead 00:06:30.136 LINK nvme_dp 00:06:30.395 CC test/nvme/reserve/reserve.o 00:06:30.395 LINK err_injection 00:06:30.395 LINK startup 00:06:30.395 CXX test/cpp_headers/keyring.o 00:06:30.395 CC test/nvme/simple_copy/simple_copy.o 00:06:30.395 CC test/nvme/connect_stress/connect_stress.o 00:06:30.395 LINK bdevperf 00:06:30.395 CXX test/cpp_headers/keyring_module.o 00:06:30.395 CC test/nvme/boot_partition/boot_partition.o 00:06:30.395 LINK reserve 00:06:30.395 CC test/nvme/compliance/nvme_compliance.o 00:06:30.654 CC test/nvme/fused_ordering/fused_ordering.o 00:06:30.654 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:30.654 LINK connect_stress 00:06:30.654 LINK simple_copy 00:06:30.654 CXX test/cpp_headers/likely.o 00:06:30.654 LINK boot_partition 00:06:30.654 CC test/nvme/fdp/fdp.o 00:06:30.654 LINK fused_ordering 00:06:30.654 LINK doorbell_aers 00:06:30.654 CXX test/cpp_headers/log.o 00:06:30.654 CXX test/cpp_headers/lvol.o 00:06:30.654 CXX test/cpp_headers/md5.o 00:06:30.654 CC test/nvme/cuse/cuse.o 00:06:30.913 CC examples/nvmf/nvmf/nvmf.o 00:06:30.913 CXX test/cpp_headers/memory.o 00:06:30.913 CXX test/cpp_headers/mmio.o 00:06:30.913 LINK nvme_compliance 00:06:30.913 CXX test/cpp_headers/nbd.o 00:06:30.913 CXX test/cpp_headers/net.o 00:06:30.913 CXX test/cpp_headers/notify.o 00:06:30.913 CXX test/cpp_headers/nvme.o 00:06:30.913 CXX test/cpp_headers/nvme_intel.o 00:06:30.913 LINK fdp 00:06:30.913 CXX test/cpp_headers/nvme_ocssd.o 00:06:30.913 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:31.172 CXX test/cpp_headers/nvme_spec.o 00:06:31.172 CXX test/cpp_headers/nvme_zns.o 00:06:31.172 LINK nvmf 00:06:31.172 CXX test/cpp_headers/nvmf_cmd.o 00:06:31.172 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:31.173 CXX test/cpp_headers/nvmf.o 00:06:31.173 CXX test/cpp_headers/nvmf_spec.o 00:06:31.173 CXX test/cpp_headers/nvmf_transport.o 00:06:31.173 CXX 
test/cpp_headers/opal.o 00:06:31.173 CXX test/cpp_headers/opal_spec.o 00:06:31.173 CXX test/cpp_headers/pci_ids.o 00:06:31.173 CXX test/cpp_headers/pipe.o 00:06:31.499 CXX test/cpp_headers/queue.o 00:06:31.499 CXX test/cpp_headers/reduce.o 00:06:31.499 CXX test/cpp_headers/rpc.o 00:06:31.499 CXX test/cpp_headers/scheduler.o 00:06:31.499 CXX test/cpp_headers/scsi.o 00:06:31.499 CXX test/cpp_headers/scsi_spec.o 00:06:31.499 CXX test/cpp_headers/sock.o 00:06:31.499 CXX test/cpp_headers/stdinc.o 00:06:31.499 CXX test/cpp_headers/string.o 00:06:31.499 CXX test/cpp_headers/thread.o 00:06:31.499 CXX test/cpp_headers/trace.o 00:06:31.499 CXX test/cpp_headers/trace_parser.o 00:06:31.499 CXX test/cpp_headers/tree.o 00:06:31.499 CXX test/cpp_headers/ublk.o 00:06:31.499 CXX test/cpp_headers/util.o 00:06:31.499 CXX test/cpp_headers/uuid.o 00:06:31.499 CXX test/cpp_headers/version.o 00:06:31.499 CXX test/cpp_headers/vfio_user_pci.o 00:06:31.499 CXX test/cpp_headers/vfio_user_spec.o 00:06:31.759 CXX test/cpp_headers/vhost.o 00:06:31.759 CXX test/cpp_headers/vmd.o 00:06:31.759 CXX test/cpp_headers/xor.o 00:06:31.760 CXX test/cpp_headers/zipf.o 00:06:32.019 LINK cuse 00:06:34.556 LINK esnap 00:06:35.124 00:06:35.125 real 1m12.833s 00:06:35.125 user 5m35.495s 00:06:35.125 sys 1m11.232s 00:06:35.125 16:48:04 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:35.125 16:48:04 make -- common/autotest_common.sh@10 -- $ set +x 00:06:35.125 ************************************ 00:06:35.125 END TEST make 00:06:35.125 ************************************ 00:06:35.125 16:48:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:35.125 16:48:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:35.125 16:48:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:35.125 16:48:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:35.125 16:48:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 
00:06:35.125 16:48:04 -- pm/common@44 -- $ pid=6199 00:06:35.125 16:48:04 -- pm/common@50 -- $ kill -TERM 6199 00:06:35.125 16:48:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:35.125 16:48:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:35.125 16:48:04 -- pm/common@44 -- $ pid=6201 00:06:35.125 16:48:04 -- pm/common@50 -- $ kill -TERM 6201 00:06:35.125 16:48:04 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:35.125 16:48:04 -- common/autotest_common.sh@1681 -- # lcov --version 00:06:35.125 16:48:04 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:35.125 16:48:04 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:35.125 16:48:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.125 16:48:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.125 16:48:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.125 16:48:04 -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.125 16:48:04 -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.125 16:48:04 -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.125 16:48:04 -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.125 16:48:04 -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.125 16:48:04 -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.125 16:48:04 -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.125 16:48:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.125 16:48:04 -- scripts/common.sh@344 -- # case "$op" in 00:06:35.125 16:48:04 -- scripts/common.sh@345 -- # : 1 00:06:35.125 16:48:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.125 16:48:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.125 16:48:04 -- scripts/common.sh@365 -- # decimal 1 00:06:35.125 16:48:04 -- scripts/common.sh@353 -- # local d=1 00:06:35.125 16:48:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.125 16:48:04 -- scripts/common.sh@355 -- # echo 1 00:06:35.125 16:48:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.125 16:48:04 -- scripts/common.sh@366 -- # decimal 2 00:06:35.125 16:48:04 -- scripts/common.sh@353 -- # local d=2 00:06:35.125 16:48:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.125 16:48:04 -- scripts/common.sh@355 -- # echo 2 00:06:35.125 16:48:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.125 16:48:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.125 16:48:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.125 16:48:04 -- scripts/common.sh@368 -- # return 0 00:06:35.125 16:48:04 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.125 16:48:04 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:35.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.125 --rc genhtml_branch_coverage=1 00:06:35.125 --rc genhtml_function_coverage=1 00:06:35.125 --rc genhtml_legend=1 00:06:35.125 --rc geninfo_all_blocks=1 00:06:35.125 --rc geninfo_unexecuted_blocks=1 00:06:35.125 00:06:35.125 ' 00:06:35.125 16:48:04 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:35.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.125 --rc genhtml_branch_coverage=1 00:06:35.125 --rc genhtml_function_coverage=1 00:06:35.125 --rc genhtml_legend=1 00:06:35.125 --rc geninfo_all_blocks=1 00:06:35.125 --rc geninfo_unexecuted_blocks=1 00:06:35.125 00:06:35.125 ' 00:06:35.125 16:48:04 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:35.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.125 --rc genhtml_branch_coverage=1 00:06:35.125 --rc 
genhtml_function_coverage=1 00:06:35.125 --rc genhtml_legend=1 00:06:35.125 --rc geninfo_all_blocks=1 00:06:35.125 --rc geninfo_unexecuted_blocks=1 00:06:35.125 00:06:35.125 ' 00:06:35.125 16:48:04 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:35.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.125 --rc genhtml_branch_coverage=1 00:06:35.125 --rc genhtml_function_coverage=1 00:06:35.125 --rc genhtml_legend=1 00:06:35.125 --rc geninfo_all_blocks=1 00:06:35.125 --rc geninfo_unexecuted_blocks=1 00:06:35.125 00:06:35.125 ' 00:06:35.125 16:48:04 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:35.125 16:48:04 -- nvmf/common.sh@7 -- # uname -s 00:06:35.125 16:48:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.125 16:48:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.125 16:48:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.125 16:48:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.125 16:48:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.125 16:48:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.125 16:48:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.125 16:48:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.125 16:48:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.125 16:48:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.384 16:48:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a68b413-089f-4012-909f-922ea4c3e36c 00:06:35.384 16:48:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5a68b413-089f-4012-909f-922ea4c3e36c 00:06:35.384 16:48:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.384 16:48:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.384 16:48:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:35.384 16:48:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:35.384 16:48:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:35.384 16:48:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:35.384 16:48:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.384 16:48:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.384 16:48:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.384 16:48:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.384 16:48:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.384 16:48:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.384 16:48:04 -- paths/export.sh@5 -- # export PATH 00:06:35.384 16:48:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.384 16:48:04 -- nvmf/common.sh@51 -- # : 0 00:06:35.384 16:48:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:35.384 16:48:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:35.384 16:48:04 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:06:35.384 16:48:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.384 16:48:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.384 16:48:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:35.384 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:35.384 16:48:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:35.384 16:48:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:35.384 16:48:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:35.384 16:48:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:35.384 16:48:04 -- spdk/autotest.sh@32 -- # uname -s 00:06:35.384 16:48:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:35.384 16:48:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:35.384 16:48:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:35.384 16:48:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:35.384 16:48:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:35.384 16:48:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:35.384 16:48:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:35.384 16:48:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:35.384 16:48:04 -- spdk/autotest.sh@48 -- # udevadm_pid=66755 00:06:35.384 16:48:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:35.384 16:48:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:35.384 16:48:04 -- pm/common@17 -- # local monitor 00:06:35.385 16:48:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:35.385 16:48:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:35.385 16:48:04 -- pm/common@25 -- # sleep 1 00:06:35.385 16:48:04 -- pm/common@21 -- # date +%s 00:06:35.385 16:48:04 -- 
pm/common@21 -- # date +%s 00:06:35.385 16:48:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731084484 00:06:35.385 16:48:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731084484 00:06:35.385 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731084484_collect-vmstat.pm.log 00:06:35.385 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731084484_collect-cpu-load.pm.log 00:06:36.323 16:48:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:36.323 16:48:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:36.323 16:48:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:36.323 16:48:05 -- common/autotest_common.sh@10 -- # set +x 00:06:36.323 16:48:05 -- spdk/autotest.sh@59 -- # create_test_list 00:06:36.323 16:48:05 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:36.323 16:48:05 -- common/autotest_common.sh@10 -- # set +x 00:06:36.323 16:48:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:36.323 16:48:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:36.323 16:48:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:36.323 16:48:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:36.323 16:48:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:36.323 16:48:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:36.323 16:48:05 -- common/autotest_common.sh@1455 -- # uname 00:06:36.583 16:48:05 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:36.583 16:48:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:36.583 16:48:05 -- common/autotest_common.sh@1475 -- 
# uname 00:06:36.583 16:48:05 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:36.583 16:48:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:36.583 16:48:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:36.583 lcov: LCOV version 1.15 00:06:36.583 16:48:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:51.465 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:51.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:06.363 16:48:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:06.363 16:48:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.363 16:48:34 -- common/autotest_common.sh@10 -- # set +x 00:07:06.363 16:48:34 -- spdk/autotest.sh@78 -- # rm -f 00:07:06.363 16:48:34 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:06.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:06.363 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:06.363 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:06.363 16:48:34 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:06.363 16:48:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:06.363 16:48:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:06.363 16:48:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:06.363 
16:48:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.363 16:48:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:06.363 16:48:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:06.363 16:48:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:06.363 16:48:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.363 16:48:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.363 16:48:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:06.363 16:48:34 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:06.363 16:48:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:06.363 16:48:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.363 16:48:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.363 16:48:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:07:06.363 16:48:35 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:07:06.363 16:48:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:06.363 16:48:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.363 16:48:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.363 16:48:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:07:06.363 16:48:35 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:07:06.363 16:48:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:06.363 16:48:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.363 16:48:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:06.363 16:48:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:06.363 16:48:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:06.363 16:48:35 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:07:06.363 16:48:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:06.363 16:48:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:06.363 No valid GPT data, bailing 00:07:06.363 16:48:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:06.363 16:48:35 -- scripts/common.sh@394 -- # pt= 00:07:06.363 16:48:35 -- scripts/common.sh@395 -- # return 1 00:07:06.363 16:48:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:06.363 1+0 records in 00:07:06.363 1+0 records out 00:07:06.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00608418 s, 172 MB/s 00:07:06.363 16:48:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:06.363 16:48:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:06.363 16:48:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:06.363 16:48:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:06.363 16:48:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:06.363 No valid GPT data, bailing 00:07:06.363 16:48:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:06.363 16:48:35 -- scripts/common.sh@394 -- # pt= 00:07:06.363 16:48:35 -- scripts/common.sh@395 -- # return 1 00:07:06.363 16:48:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:06.363 1+0 records in 00:07:06.363 1+0 records out 00:07:06.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00615857 s, 170 MB/s 00:07:06.363 16:48:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:06.363 16:48:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:06.363 16:48:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:07:06.363 16:48:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:07:06.363 16:48:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:07:06.363 No valid GPT data, bailing 00:07:06.363 16:48:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:07:06.363 16:48:35 -- scripts/common.sh@394 -- # pt= 00:07:06.363 16:48:35 -- scripts/common.sh@395 -- # return 1 00:07:06.363 16:48:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:07:06.363 1+0 records in 00:07:06.364 1+0 records out 00:07:06.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00655885 s, 160 MB/s 00:07:06.364 16:48:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:06.364 16:48:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:06.364 16:48:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:07:06.364 16:48:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:07:06.364 16:48:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:07:06.364 No valid GPT data, bailing 00:07:06.364 16:48:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:07:06.364 16:48:35 -- scripts/common.sh@394 -- # pt= 00:07:06.364 16:48:35 -- scripts/common.sh@395 -- # return 1 00:07:06.364 16:48:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:07:06.364 1+0 records in 00:07:06.364 1+0 records out 00:07:06.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459411 s, 228 MB/s 00:07:06.364 16:48:35 -- spdk/autotest.sh@105 -- # sync 00:07:06.364 16:48:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:06.364 16:48:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:06.364 16:48:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:08.902 16:48:38 -- spdk/autotest.sh@111 -- # uname -s 00:07:08.902 16:48:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:08.902 16:48:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:08.902 16:48:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:07:09.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:09.472 Hugepages 00:07:09.472 node hugesize free / total 00:07:09.472 node0 1048576kB 0 / 0 00:07:09.472 node0 2048kB 0 / 0 00:07:09.472 00:07:09.472 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:09.731 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:09.731 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:09.991 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:09.991 16:48:39 -- spdk/autotest.sh@117 -- # uname -s 00:07:09.991 16:48:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:09.991 16:48:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:09.991 16:48:39 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:10.561 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:10.820 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.820 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.820 16:48:40 -- common/autotest_common.sh@1515 -- # sleep 1 00:07:11.760 16:48:41 -- common/autotest_common.sh@1516 -- # bdfs=() 00:07:11.760 16:48:41 -- common/autotest_common.sh@1516 -- # local bdfs 00:07:11.760 16:48:41 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:07:11.760 16:48:41 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:07:11.760 16:48:41 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:11.760 16:48:41 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:11.760 16:48:41 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:11.760 16:48:41 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:11.760 16:48:41 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:12.019 16:48:41 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:07:12.019 16:48:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:12.019 16:48:41 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:12.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.538 Waiting for block devices as requested 00:07:12.538 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:12.538 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:12.798 16:48:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:12.798 16:48:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:12.798 16:48:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:07:12.798 16:48:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:12.798 16:48:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:12.798 16:48:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:12.798 16:48:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:12.798 16:48:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:07:12.798 16:48:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:07:12.798 16:48:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:07:12.798 16:48:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:12.798 16:48:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:07:12.798 16:48:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:12.798 16:48:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:07:12.798 16:48:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:12.798 16:48:42 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:07:12.798 16:48:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:07:12.798 16:48:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:12.798 16:48:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:12.798 16:48:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:12.798 16:48:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:12.798 16:48:42 -- common/autotest_common.sh@1541 -- # continue 00:07:12.798 16:48:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:12.798 16:48:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:12.798 16:48:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:12.798 16:48:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:07:12.798 16:48:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:12.798 16:48:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:12.798 16:48:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:12.798 16:48:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:07:12.798 16:48:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:07:12.798 16:48:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:07:12.798 16:48:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:07:12.798 16:48:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:12.798 16:48:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:12.798 16:48:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:07:12.798 16:48:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:12.798 16:48:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:12.798 16:48:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:07:12.798 16:48:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:12.798 16:48:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:12.798 16:48:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:12.798 16:48:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:12.798 16:48:42 -- common/autotest_common.sh@1541 -- # continue 00:07:12.798 16:48:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:12.798 16:48:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:12.798 16:48:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.798 16:48:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:12.798 16:48:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:12.798 16:48:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.798 16:48:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:13.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:13.741 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.741 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.741 16:48:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:13.741 16:48:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.741 16:48:43 -- common/autotest_common.sh@10 -- # set +x 00:07:14.005 16:48:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:14.005 16:48:43 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:14.005 16:48:43 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:14.005 16:48:43 -- common/autotest_common.sh@1561 -- # bdfs=() 00:07:14.005 16:48:43 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:07:14.005 16:48:43 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:07:14.005 16:48:43 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:07:14.005 16:48:43 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:07:14.005 
16:48:43 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:14.005 16:48:43 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:14.005 16:48:43 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:14.005 16:48:43 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:14.005 16:48:43 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:14.005 16:48:43 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:07:14.005 16:48:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:14.005 16:48:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:14.005 16:48:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:14.005 16:48:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:07:14.005 16:48:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:14.005 16:48:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:14.005 16:48:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:14.005 16:48:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:07:14.005 16:48:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:14.005 16:48:43 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:07:14.005 16:48:43 -- common/autotest_common.sh@1570 -- # return 0 00:07:14.005 16:48:43 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:07:14.005 16:48:43 -- common/autotest_common.sh@1578 -- # return 0 00:07:14.005 16:48:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:14.005 16:48:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:14.005 16:48:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:14.005 16:48:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:14.005 16:48:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:14.005 16:48:43 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:07:14.005 16:48:43 -- common/autotest_common.sh@10 -- # set +x 00:07:14.005 16:48:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:14.005 16:48:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:14.005 16:48:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.005 16:48:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.005 16:48:43 -- common/autotest_common.sh@10 -- # set +x 00:07:14.005 ************************************ 00:07:14.005 START TEST env 00:07:14.005 ************************************ 00:07:14.005 16:48:43 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:14.005 * Looking for test storage... 00:07:14.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:14.005 16:48:43 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:14.005 16:48:43 env -- common/autotest_common.sh@1681 -- # lcov --version 00:07:14.005 16:48:43 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:14.265 16:48:43 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:14.265 16:48:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.265 16:48:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.265 16:48:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.265 16:48:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.265 16:48:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.265 16:48:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.265 16:48:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.265 16:48:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.265 16:48:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.265 16:48:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.265 16:48:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.265 16:48:43 env -- 
scripts/common.sh@344 -- # case "$op" in 00:07:14.265 16:48:43 env -- scripts/common.sh@345 -- # : 1 00:07:14.265 16:48:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.265 16:48:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.265 16:48:43 env -- scripts/common.sh@365 -- # decimal 1 00:07:14.265 16:48:43 env -- scripts/common.sh@353 -- # local d=1 00:07:14.265 16:48:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.265 16:48:43 env -- scripts/common.sh@355 -- # echo 1 00:07:14.265 16:48:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.265 16:48:43 env -- scripts/common.sh@366 -- # decimal 2 00:07:14.265 16:48:43 env -- scripts/common.sh@353 -- # local d=2 00:07:14.265 16:48:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.265 16:48:43 env -- scripts/common.sh@355 -- # echo 2 00:07:14.265 16:48:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.265 16:48:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.265 16:48:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.265 16:48:43 env -- scripts/common.sh@368 -- # return 0 00:07:14.265 16:48:43 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.265 16:48:43 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:14.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.265 --rc genhtml_branch_coverage=1 00:07:14.265 --rc genhtml_function_coverage=1 00:07:14.265 --rc genhtml_legend=1 00:07:14.265 --rc geninfo_all_blocks=1 00:07:14.265 --rc geninfo_unexecuted_blocks=1 00:07:14.265 00:07:14.265 ' 00:07:14.265 16:48:43 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:14.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.265 --rc genhtml_branch_coverage=1 00:07:14.265 --rc genhtml_function_coverage=1 00:07:14.265 --rc genhtml_legend=1 00:07:14.265 --rc 
geninfo_all_blocks=1 00:07:14.265 --rc geninfo_unexecuted_blocks=1 00:07:14.265 00:07:14.265 ' 00:07:14.265 16:48:43 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:14.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.266 --rc genhtml_branch_coverage=1 00:07:14.266 --rc genhtml_function_coverage=1 00:07:14.266 --rc genhtml_legend=1 00:07:14.266 --rc geninfo_all_blocks=1 00:07:14.266 --rc geninfo_unexecuted_blocks=1 00:07:14.266 00:07:14.266 ' 00:07:14.266 16:48:43 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:14.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.266 --rc genhtml_branch_coverage=1 00:07:14.266 --rc genhtml_function_coverage=1 00:07:14.266 --rc genhtml_legend=1 00:07:14.266 --rc geninfo_all_blocks=1 00:07:14.266 --rc geninfo_unexecuted_blocks=1 00:07:14.266 00:07:14.266 ' 00:07:14.266 16:48:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:14.266 16:48:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.266 16:48:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.266 16:48:43 env -- common/autotest_common.sh@10 -- # set +x 00:07:14.266 ************************************ 00:07:14.266 START TEST env_memory 00:07:14.266 ************************************ 00:07:14.266 16:48:43 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:14.266 00:07:14.266 00:07:14.266 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.266 http://cunit.sourceforge.net/ 00:07:14.266 00:07:14.266 00:07:14.266 Suite: memory 00:07:14.266 Test: alloc and free memory map ...[2024-11-08 16:48:43.705387] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:14.266 passed 00:07:14.266 Test: mem map translation ...[2024-11-08 16:48:43.745688] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:14.266 [2024-11-08 16:48:43.745731] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:14.266 [2024-11-08 16:48:43.745804] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:14.266 [2024-11-08 16:48:43.745823] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:14.526 passed 00:07:14.526 Test: mem map registration ...[2024-11-08 16:48:43.806578] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:14.526 [2024-11-08 16:48:43.806614] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:14.526 passed 00:07:14.526 Test: mem map adjacent registrations ...passed 00:07:14.526 00:07:14.526 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.526 suites 1 1 n/a 0 0 00:07:14.526 tests 4 4 4 0 0 00:07:14.526 asserts 152 152 152 0 n/a 00:07:14.526 00:07:14.526 Elapsed time = 0.226 seconds 00:07:14.526 00:07:14.526 real 0m0.282s 00:07:14.526 user 0m0.245s 00:07:14.526 sys 0m0.028s 00:07:14.526 16:48:43 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.526 16:48:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:14.526 ************************************ 00:07:14.526 END TEST env_memory 00:07:14.526 ************************************ 00:07:14.526 16:48:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:14.526 
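Every test in this log is wrapped in the same `************ START TEST … ************` / timing / `************ END TEST … ************` banner pattern. A minimal sketch of such a wrapper (a stand-in illustration, not the real `run_test` from autotest_common.sh):

```shell
# Hedged sketch of the START/END banner + timing pattern seen throughout this
# log. The real harness also manages xtrace state; this only shows the shape.
run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                 # bash's time keyword reports real/user/sys on stderr
    local rc=$?               # exit status of the wrapped command
    echo "************ END TEST $name ************"
    return $rc
}

run_test demo true && echo "rc=0"
```

The banners make it easy to grep a long console log for the boundaries of any one test, which is how the `START TEST env_memory` / `END TEST env_memory` markers above are produced.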
16:48:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.526 16:48:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.526 16:48:43 env -- common/autotest_common.sh@10 -- # set +x 00:07:14.526 ************************************ 00:07:14.526 START TEST env_vtophys 00:07:14.526 ************************************ 00:07:14.526 16:48:43 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:14.526 EAL: lib.eal log level changed from notice to debug 00:07:14.526 EAL: Detected lcore 0 as core 0 on socket 0 00:07:14.526 EAL: Detected lcore 1 as core 0 on socket 0 00:07:14.526 EAL: Detected lcore 2 as core 0 on socket 0 00:07:14.526 EAL: Detected lcore 3 as core 0 on socket 0 00:07:14.526 EAL: Detected lcore 4 as core 0 on socket 0 00:07:14.526 EAL: Detected lcore 5 as core 0 on socket 0 00:07:14.526 EAL: Detected lcore 6 as core 0 on socket 0 00:07:14.526 EAL: Detected lcore 7 as core 0 on socket 0 00:07:14.526 EAL: Detected lcore 8 as core 0 on socket 0 00:07:14.526 EAL: Detected lcore 9 as core 0 on socket 0 00:07:14.526 EAL: Maximum logical cores by configuration: 128 00:07:14.526 EAL: Detected CPU lcores: 10 00:07:14.526 EAL: Detected NUMA nodes: 1 00:07:14.527 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:14.527 EAL: Detected shared linkage of DPDK 00:07:14.527 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:07:14.527 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:07:14.527 EAL: Registered [vdev] bus. 
00:07:14.527 EAL: bus.vdev log level changed from disabled to notice 00:07:14.527 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:07:14.527 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:07:14.527 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:07:14.527 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:07:14.527 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:07:14.527 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:07:14.527 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:07:14.527 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:07:14.527 EAL: No shared files mode enabled, IPC will be disabled 00:07:14.527 EAL: No shared files mode enabled, IPC is disabled 00:07:14.527 EAL: Selected IOVA mode 'PA' 00:07:14.527 EAL: Probing VFIO support... 00:07:14.527 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:14.527 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:14.527 EAL: Ask a virtual area of 0x2e000 bytes 00:07:14.527 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:14.527 EAL: Setting up physically contiguous memory... 
00:07:14.527 EAL: Setting maximum number of open files to 524288 00:07:14.527 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:14.527 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:14.527 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.527 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:14.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.527 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.527 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:14.527 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:14.527 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.527 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:14.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.527 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.527 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:14.527 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:14.527 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.527 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:14.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.527 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.527 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:14.527 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:14.527 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.527 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:14.527 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.527 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.527 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:14.527 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:14.527 EAL: Hugepages will be freed exactly as allocated. 
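Each of the four VA reservations logged above is `size = 0x400000000`, which is exactly the `n_segs:8192` times `hugepage_sz:2097152` from the "Creating 4 segment lists" line. A quick arithmetic check:

```shell
# Verify the EAL memseg-list sizing visible in the log: 8192 segments of 2 MiB
# hugepages per list should equal the 0x400000000-byte VA reservation.
n_segs=8192
hugepage_sz=$((2 * 1024 * 1024))      # 2 MiB pages, as detected above
va_size=$((n_segs * hugepage_sz))
printf 'per-list VA: 0x%x\n' "$va_size"
```

This prints `per-list VA: 0x400000000`, i.e. 16 GiB of reserved virtual address space per memseg list.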
00:07:14.527 EAL: No shared files mode enabled, IPC is disabled 00:07:14.527 EAL: No shared files mode enabled, IPC is disabled 00:07:14.786 EAL: TSC frequency is ~2290000 KHz 00:07:14.786 EAL: Main lcore 0 is ready (tid=7efe07a37a40;cpuset=[0]) 00:07:14.786 EAL: Trying to obtain current memory policy. 00:07:14.786 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.786 EAL: Restoring previous memory policy: 0 00:07:14.786 EAL: request: mp_malloc_sync 00:07:14.786 EAL: No shared files mode enabled, IPC is disabled 00:07:14.786 EAL: Heap on socket 0 was expanded by 2MB 00:07:14.786 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:14.786 EAL: No shared files mode enabled, IPC is disabled 00:07:14.786 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:14.786 EAL: Mem event callback 'spdk:(nil)' registered 00:07:14.786 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:14.786 00:07:14.786 00:07:14.786 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.786 http://cunit.sourceforge.net/ 00:07:14.786 00:07:14.786 00:07:14.786 Suite: components_suite 00:07:15.046 Test: vtophys_malloc_test ...passed 00:07:15.046 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:15.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.046 EAL: Restoring previous memory policy: 4 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was expanded by 4MB 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was shrunk by 4MB 00:07:15.046 EAL: Trying to obtain current memory policy. 
00:07:15.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.046 EAL: Restoring previous memory policy: 4 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was expanded by 6MB 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was shrunk by 6MB 00:07:15.046 EAL: Trying to obtain current memory policy. 00:07:15.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.046 EAL: Restoring previous memory policy: 4 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was expanded by 10MB 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was shrunk by 10MB 00:07:15.046 EAL: Trying to obtain current memory policy. 00:07:15.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.046 EAL: Restoring previous memory policy: 4 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was expanded by 18MB 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was shrunk by 18MB 00:07:15.046 EAL: Trying to obtain current memory policy. 
00:07:15.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.046 EAL: Restoring previous memory policy: 4 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was expanded by 34MB 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was shrunk by 34MB 00:07:15.046 EAL: Trying to obtain current memory policy. 00:07:15.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.046 EAL: Restoring previous memory policy: 4 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was expanded by 66MB 00:07:15.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.046 EAL: request: mp_malloc_sync 00:07:15.046 EAL: No shared files mode enabled, IPC is disabled 00:07:15.046 EAL: Heap on socket 0 was shrunk by 66MB 00:07:15.046 EAL: Trying to obtain current memory policy. 00:07:15.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.307 EAL: Restoring previous memory policy: 4 00:07:15.307 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.307 EAL: request: mp_malloc_sync 00:07:15.307 EAL: No shared files mode enabled, IPC is disabled 00:07:15.307 EAL: Heap on socket 0 was expanded by 130MB 00:07:15.307 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.307 EAL: request: mp_malloc_sync 00:07:15.307 EAL: No shared files mode enabled, IPC is disabled 00:07:15.307 EAL: Heap on socket 0 was shrunk by 130MB 00:07:15.307 EAL: Trying to obtain current memory policy. 
00:07:15.307 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.307 EAL: Restoring previous memory policy: 4 00:07:15.307 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.307 EAL: request: mp_malloc_sync 00:07:15.307 EAL: No shared files mode enabled, IPC is disabled 00:07:15.307 EAL: Heap on socket 0 was expanded by 258MB 00:07:15.307 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.307 EAL: request: mp_malloc_sync 00:07:15.307 EAL: No shared files mode enabled, IPC is disabled 00:07:15.307 EAL: Heap on socket 0 was shrunk by 258MB 00:07:15.307 EAL: Trying to obtain current memory policy. 00:07:15.307 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.566 EAL: Restoring previous memory policy: 4 00:07:15.566 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.566 EAL: request: mp_malloc_sync 00:07:15.566 EAL: No shared files mode enabled, IPC is disabled 00:07:15.566 EAL: Heap on socket 0 was expanded by 514MB 00:07:15.566 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.566 EAL: request: mp_malloc_sync 00:07:15.566 EAL: No shared files mode enabled, IPC is disabled 00:07:15.566 EAL: Heap on socket 0 was shrunk by 514MB 00:07:15.566 EAL: Trying to obtain current memory policy. 
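The heap expand/shrink sizes exercised by this vtophys suite (4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, 514MB, 1026MB) appear to follow a `2^k + 2` MB progression; a sketch that reproduces the sequence:

```shell
# Reproduce the allocation-size progression observed in the vtophys test log.
# This is an inference from the logged sizes, not the test's actual source.
sizes=()
for ((k = 1; k <= 10; k++)); do
    sizes+=("$(( (1 << k) + 2 ))MB")
done
echo "${sizes[*]}"
```

This prints `4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB`, matching the expand/shrink pairs in the EAL output (the initial 2MB expansion at startup is separate).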
00:07:15.566 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.826 EAL: Restoring previous memory policy: 4 00:07:15.826 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.826 EAL: request: mp_malloc_sync 00:07:15.826 EAL: No shared files mode enabled, IPC is disabled 00:07:15.826 EAL: Heap on socket 0 was expanded by 1026MB 00:07:16.085 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.085 passed 00:07:16.085 00:07:16.085 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.085 suites 1 1 n/a 0 0 00:07:16.085 tests 2 2 2 0 0 00:07:16.085 asserts 5806 5806 5806 0 n/a 00:07:16.085 00:07:16.085 Elapsed time = 1.377 seconds 00:07:16.085 EAL: request: mp_malloc_sync 00:07:16.085 EAL: No shared files mode enabled, IPC is disabled 00:07:16.085 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:16.085 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.085 EAL: request: mp_malloc_sync 00:07:16.085 EAL: No shared files mode enabled, IPC is disabled 00:07:16.085 EAL: Heap on socket 0 was shrunk by 2MB 00:07:16.085 EAL: No shared files mode enabled, IPC is disabled 00:07:16.085 EAL: No shared files mode enabled, IPC is disabled 00:07:16.085 EAL: No shared files mode enabled, IPC is disabled 00:07:16.085 00:07:16.085 real 0m1.638s 00:07:16.085 user 0m0.770s 00:07:16.085 sys 0m0.739s 00:07:16.085 16:48:45 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.085 16:48:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:16.085 ************************************ 00:07:16.085 END TEST env_vtophys 00:07:16.085 ************************************ 00:07:16.346 16:48:45 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:16.346 16:48:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.346 16:48:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.346 16:48:45 env -- common/autotest_common.sh@10 -- # set +x 00:07:16.346 
************************************ 00:07:16.346 START TEST env_pci 00:07:16.346 ************************************ 00:07:16.346 16:48:45 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:16.346 00:07:16.346 00:07:16.346 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.346 http://cunit.sourceforge.net/ 00:07:16.346 00:07:16.346 00:07:16.346 Suite: pci 00:07:16.346 Test: pci_hook ...[2024-11-08 16:48:45.715260] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68982 has claimed it 00:07:16.346 passed 00:07:16.346 00:07:16.346 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.346 suites 1 1 n/a 0 0 00:07:16.346 tests 1 1 1 0 0 00:07:16.346 asserts 25 25 25 0 n/a 00:07:16.346 00:07:16.346 Elapsed time = 0.004 seconds 00:07:16.346 EAL: Cannot find device (10000:00:01.0) 00:07:16.346 EAL: Failed to attach device on primary process 00:07:16.346 00:07:16.346 real 0m0.087s 00:07:16.346 user 0m0.045s 00:07:16.346 sys 0m0.041s 00:07:16.346 16:48:45 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.346 16:48:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:16.346 ************************************ 00:07:16.346 END TEST env_pci 00:07:16.346 ************************************ 00:07:16.346 16:48:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:16.346 16:48:45 env -- env/env.sh@15 -- # uname 00:07:16.346 16:48:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:16.346 16:48:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:16.346 16:48:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:16.346 16:48:45 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:16.346 16:48:45 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.346 16:48:45 env -- common/autotest_common.sh@10 -- # set +x 00:07:16.346 ************************************ 00:07:16.346 START TEST env_dpdk_post_init 00:07:16.346 ************************************ 00:07:16.346 16:48:45 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:16.606 EAL: Detected CPU lcores: 10 00:07:16.606 EAL: Detected NUMA nodes: 1 00:07:16.606 EAL: Detected shared linkage of DPDK 00:07:16.606 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:16.606 EAL: Selected IOVA mode 'PA' 00:07:16.606 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:16.606 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:16.606 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:16.606 Starting DPDK initialization... 00:07:16.606 Starting SPDK post initialization... 00:07:16.606 SPDK NVMe probe 00:07:16.606 Attaching to 0000:00:10.0 00:07:16.606 Attaching to 0000:00:11.0 00:07:16.606 Attached to 0000:00:10.0 00:07:16.606 Attached to 0000:00:11.0 00:07:16.606 Cleaning up... 
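The probed controllers (`1b36:0010`) have the same device ID `0x0010` that the harness's bdf loop read from sysfs earlier in this log, where each ID was compared against `0x0a54`. A mock sketch of that loop, using a temporary directory in place of the real `/sys/bus/pci/devices` tree:

```shell
# Hedged sketch of the per-bdf device-ID check from autotest_common.sh seen
# earlier in this log. $sysfs is a mock; real code reads /sys/bus/pci/devices.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:00:10.0" "$sysfs/0000:00:11.0"
echo 0x0010 > "$sysfs/0000:00:10.0/device"
echo 0x0010 > "$sysfs/0000:00:11.0/device"

matches=0
count=0
for bdf in "$sysfs"/*; do
    count=$((count + 1))
    device=$(cat "$bdf/device")
    # the harness compares against 0x0a54; neither QEMU NVMe device matches,
    # just as in the log above
    [[ $device == 0x0a54 ]] && matches=$((matches + 1))
done
echo "checked $count devices, $matches matched"
rm -rf "$sysfs"
```

With both mock devices reporting `0x0010`, the loop checks 2 devices and matches none, mirroring the `(( 0 > 0 ))` outcome logged above.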
00:07:16.606 00:07:16.606 real 0m0.251s 00:07:16.606 user 0m0.072s 00:07:16.606 sys 0m0.079s 00:07:16.606 16:48:46 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.606 16:48:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:16.606 ************************************ 00:07:16.606 END TEST env_dpdk_post_init 00:07:16.606 ************************************ 00:07:16.866 16:48:46 env -- env/env.sh@26 -- # uname 00:07:16.866 16:48:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:16.866 16:48:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:16.866 16:48:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.866 16:48:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.866 16:48:46 env -- common/autotest_common.sh@10 -- # set +x 00:07:16.866 ************************************ 00:07:16.866 START TEST env_mem_callbacks 00:07:16.866 ************************************ 00:07:16.866 16:48:46 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:16.866 EAL: Detected CPU lcores: 10 00:07:16.866 EAL: Detected NUMA nodes: 1 00:07:16.866 EAL: Detected shared linkage of DPDK 00:07:16.866 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:16.866 EAL: Selected IOVA mode 'PA' 00:07:16.866 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:16.866 00:07:16.866 00:07:16.866 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.866 http://cunit.sourceforge.net/ 00:07:16.866 00:07:16.866 00:07:16.866 Suite: memory 00:07:16.866 Test: test ... 
00:07:16.866 register 0x200000200000 2097152 00:07:16.866 malloc 3145728 00:07:16.866 register 0x200000400000 4194304 00:07:16.866 buf 0x200000500000 len 3145728 PASSED 00:07:16.866 malloc 64 00:07:16.866 buf 0x2000004fff40 len 64 PASSED 00:07:16.866 malloc 4194304 00:07:16.866 register 0x200000800000 6291456 00:07:16.866 buf 0x200000a00000 len 4194304 PASSED 00:07:16.866 free 0x200000500000 3145728 00:07:16.866 free 0x2000004fff40 64 00:07:16.866 unregister 0x200000400000 4194304 PASSED 00:07:16.866 free 0x200000a00000 4194304 00:07:16.866 unregister 0x200000800000 6291456 PASSED 00:07:16.866 malloc 8388608 00:07:16.866 register 0x200000400000 10485760 00:07:16.866 buf 0x200000600000 len 8388608 PASSED 00:07:16.866 free 0x200000600000 8388608 00:07:16.866 unregister 0x200000400000 10485760 PASSED 00:07:16.866 passed 00:07:16.866 00:07:16.866 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.866 suites 1 1 n/a 0 0 00:07:16.866 tests 1 1 1 0 0 00:07:16.866 asserts 15 15 15 0 n/a 00:07:16.866 00:07:16.866 Elapsed time = 0.011 seconds 00:07:16.866 00:07:16.866 real 0m0.201s 00:07:16.866 user 0m0.035s 00:07:16.866 sys 0m0.065s 00:07:16.866 16:48:46 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.866 16:48:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:16.866 ************************************ 00:07:16.866 END TEST env_mem_callbacks 00:07:16.866 ************************************ 00:07:17.126 00:07:17.126 real 0m3.016s 00:07:17.126 user 0m1.392s 00:07:17.126 sys 0m1.297s 00:07:17.126 16:48:46 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.126 16:48:46 env -- common/autotest_common.sh@10 -- # set +x 00:07:17.126 ************************************ 00:07:17.126 END TEST env 00:07:17.126 ************************************ 00:07:17.126 16:48:46 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:17.126 16:48:46 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.126 16:48:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.126 16:48:46 -- common/autotest_common.sh@10 -- # set +x 00:07:17.126 ************************************ 00:07:17.126 START TEST rpc 00:07:17.126 ************************************ 00:07:17.126 16:48:46 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:17.126 * Looking for test storage... 00:07:17.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:17.126 16:48:46 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:17.126 16:48:46 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:17.126 16:48:46 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:17.385 16:48:46 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:17.386 16:48:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.386 16:48:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.386 16:48:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.386 16:48:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.386 16:48:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.386 16:48:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.386 16:48:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.386 16:48:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.386 16:48:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.386 16:48:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.386 16:48:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.386 16:48:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:17.386 16:48:46 rpc -- scripts/common.sh@345 -- # : 1 00:07:17.386 16:48:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.386 16:48:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.386 16:48:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:17.386 16:48:46 rpc -- scripts/common.sh@353 -- # local d=1 00:07:17.386 16:48:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.386 16:48:46 rpc -- scripts/common.sh@355 -- # echo 1 00:07:17.386 16:48:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.386 16:48:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:17.386 16:48:46 rpc -- scripts/common.sh@353 -- # local d=2 00:07:17.386 16:48:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.386 16:48:46 rpc -- scripts/common.sh@355 -- # echo 2 00:07:17.386 16:48:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.386 16:48:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.386 16:48:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.386 16:48:46 rpc -- scripts/common.sh@368 -- # return 0 00:07:17.386 16:48:46 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.386 16:48:46 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.386 --rc genhtml_branch_coverage=1 00:07:17.386 --rc genhtml_function_coverage=1 00:07:17.386 --rc genhtml_legend=1 00:07:17.386 --rc geninfo_all_blocks=1 00:07:17.386 --rc geninfo_unexecuted_blocks=1 00:07:17.386 00:07:17.386 ' 00:07:17.386 16:48:46 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.386 --rc genhtml_branch_coverage=1 00:07:17.386 --rc genhtml_function_coverage=1 00:07:17.386 --rc genhtml_legend=1 00:07:17.386 --rc geninfo_all_blocks=1 00:07:17.386 --rc geninfo_unexecuted_blocks=1 00:07:17.386 00:07:17.386 ' 00:07:17.386 16:48:46 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:17.386 --rc genhtml_branch_coverage=1 00:07:17.386 --rc genhtml_function_coverage=1 00:07:17.386 --rc genhtml_legend=1 00:07:17.386 --rc geninfo_all_blocks=1 00:07:17.386 --rc geninfo_unexecuted_blocks=1 00:07:17.386 00:07:17.386 ' 00:07:17.386 16:48:46 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.386 --rc genhtml_branch_coverage=1 00:07:17.386 --rc genhtml_function_coverage=1 00:07:17.386 --rc genhtml_legend=1 00:07:17.386 --rc geninfo_all_blocks=1 00:07:17.386 --rc geninfo_unexecuted_blocks=1 00:07:17.386 00:07:17.386 ' 00:07:17.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.386 16:48:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69109 00:07:17.386 16:48:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:17.386 16:48:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69109 00:07:17.386 16:48:46 rpc -- common/autotest_common.sh@831 -- # '[' -z 69109 ']' 00:07:17.386 16:48:46 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.386 16:48:46 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.386 16:48:46 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.386 16:48:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:17.386 16:48:46 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.386 16:48:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.386 [2024-11-08 16:48:46.805234] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:17.386 [2024-11-08 16:48:46.805362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69109 ] 00:07:17.646 [2024-11-08 16:48:46.966693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.646 [2024-11-08 16:48:47.012156] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:17.646 [2024-11-08 16:48:47.012222] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69109' to capture a snapshot of events at runtime. 00:07:17.646 [2024-11-08 16:48:47.012237] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.646 [2024-11-08 16:48:47.012246] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.646 [2024-11-08 16:48:47.012259] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69109 for offline analysis/debug. 
00:07:17.646 [2024-11-08 16:48:47.012294] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.216 16:48:47 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.216 16:48:47 rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.216 16:48:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:18.216 16:48:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:18.216 16:48:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:18.216 16:48:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:18.216 16:48:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.216 16:48:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.216 16:48:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.216 ************************************ 00:07:18.216 START TEST rpc_integrity 00:07:18.216 ************************************ 00:07:18.216 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:18.216 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:18.216 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.216 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.216 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.216 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:18.216 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:18.216 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:18.216 16:48:47 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:18.216 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.216 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.216 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.216 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:18.216 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:18.216 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.216 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.216 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.216 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:18.216 { 00:07:18.216 "name": "Malloc0", 00:07:18.216 "aliases": [ 00:07:18.216 "a2ff72da-b331-4410-9f24-ca6be3310adf" 00:07:18.216 ], 00:07:18.216 "product_name": "Malloc disk", 00:07:18.216 "block_size": 512, 00:07:18.216 "num_blocks": 16384, 00:07:18.216 "uuid": "a2ff72da-b331-4410-9f24-ca6be3310adf", 00:07:18.216 "assigned_rate_limits": { 00:07:18.216 "rw_ios_per_sec": 0, 00:07:18.216 "rw_mbytes_per_sec": 0, 00:07:18.216 "r_mbytes_per_sec": 0, 00:07:18.216 "w_mbytes_per_sec": 0 00:07:18.216 }, 00:07:18.216 "claimed": false, 00:07:18.216 "zoned": false, 00:07:18.216 "supported_io_types": { 00:07:18.216 "read": true, 00:07:18.216 "write": true, 00:07:18.216 "unmap": true, 00:07:18.216 "flush": true, 00:07:18.216 "reset": true, 00:07:18.216 "nvme_admin": false, 00:07:18.216 "nvme_io": false, 00:07:18.216 "nvme_io_md": false, 00:07:18.216 "write_zeroes": true, 00:07:18.216 "zcopy": true, 00:07:18.216 "get_zone_info": false, 00:07:18.216 "zone_management": false, 00:07:18.216 "zone_append": false, 00:07:18.216 "compare": false, 00:07:18.216 "compare_and_write": false, 00:07:18.216 "abort": true, 00:07:18.216 "seek_hole": false, 
00:07:18.216 "seek_data": false, 00:07:18.216 "copy": true, 00:07:18.216 "nvme_iov_md": false 00:07:18.216 }, 00:07:18.216 "memory_domains": [ 00:07:18.216 { 00:07:18.216 "dma_device_id": "system", 00:07:18.216 "dma_device_type": 1 00:07:18.216 }, 00:07:18.216 { 00:07:18.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.216 "dma_device_type": 2 00:07:18.216 } 00:07:18.216 ], 00:07:18.216 "driver_specific": {} 00:07:18.216 } 00:07:18.216 ]' 00:07:18.216 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.477 [2024-11-08 16:48:47.750430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:18.477 [2024-11-08 16:48:47.750496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.477 [2024-11-08 16:48:47.750543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:18.477 [2024-11-08 16:48:47.750555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.477 [2024-11-08 16:48:47.752899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.477 [2024-11-08 16:48:47.752936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:18.477 Passthru0 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:18.477 { 00:07:18.477 "name": "Malloc0", 00:07:18.477 "aliases": [ 00:07:18.477 "a2ff72da-b331-4410-9f24-ca6be3310adf" 00:07:18.477 ], 00:07:18.477 "product_name": "Malloc disk", 00:07:18.477 "block_size": 512, 00:07:18.477 "num_blocks": 16384, 00:07:18.477 "uuid": "a2ff72da-b331-4410-9f24-ca6be3310adf", 00:07:18.477 "assigned_rate_limits": { 00:07:18.477 "rw_ios_per_sec": 0, 00:07:18.477 "rw_mbytes_per_sec": 0, 00:07:18.477 "r_mbytes_per_sec": 0, 00:07:18.477 "w_mbytes_per_sec": 0 00:07:18.477 }, 00:07:18.477 "claimed": true, 00:07:18.477 "claim_type": "exclusive_write", 00:07:18.477 "zoned": false, 00:07:18.477 "supported_io_types": { 00:07:18.477 "read": true, 00:07:18.477 "write": true, 00:07:18.477 "unmap": true, 00:07:18.477 "flush": true, 00:07:18.477 "reset": true, 00:07:18.477 "nvme_admin": false, 00:07:18.477 "nvme_io": false, 00:07:18.477 "nvme_io_md": false, 00:07:18.477 "write_zeroes": true, 00:07:18.477 "zcopy": true, 00:07:18.477 "get_zone_info": false, 00:07:18.477 "zone_management": false, 00:07:18.477 "zone_append": false, 00:07:18.477 "compare": false, 00:07:18.477 "compare_and_write": false, 00:07:18.477 "abort": true, 00:07:18.477 "seek_hole": false, 00:07:18.477 "seek_data": false, 00:07:18.477 "copy": true, 00:07:18.477 "nvme_iov_md": false 00:07:18.477 }, 00:07:18.477 "memory_domains": [ 00:07:18.477 { 00:07:18.477 "dma_device_id": "system", 00:07:18.477 "dma_device_type": 1 00:07:18.477 }, 00:07:18.477 { 00:07:18.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.477 "dma_device_type": 2 00:07:18.477 } 00:07:18.477 ], 00:07:18.477 "driver_specific": {} 00:07:18.477 }, 00:07:18.477 { 00:07:18.477 "name": "Passthru0", 00:07:18.477 "aliases": [ 00:07:18.477 "8dd0ae70-6e03-594c-92e7-6781deae37f3" 00:07:18.477 ], 00:07:18.477 "product_name": "passthru", 00:07:18.477 
"block_size": 512, 00:07:18.477 "num_blocks": 16384, 00:07:18.477 "uuid": "8dd0ae70-6e03-594c-92e7-6781deae37f3", 00:07:18.477 "assigned_rate_limits": { 00:07:18.477 "rw_ios_per_sec": 0, 00:07:18.477 "rw_mbytes_per_sec": 0, 00:07:18.477 "r_mbytes_per_sec": 0, 00:07:18.477 "w_mbytes_per_sec": 0 00:07:18.477 }, 00:07:18.477 "claimed": false, 00:07:18.477 "zoned": false, 00:07:18.477 "supported_io_types": { 00:07:18.477 "read": true, 00:07:18.477 "write": true, 00:07:18.477 "unmap": true, 00:07:18.477 "flush": true, 00:07:18.477 "reset": true, 00:07:18.477 "nvme_admin": false, 00:07:18.477 "nvme_io": false, 00:07:18.477 "nvme_io_md": false, 00:07:18.477 "write_zeroes": true, 00:07:18.477 "zcopy": true, 00:07:18.477 "get_zone_info": false, 00:07:18.477 "zone_management": false, 00:07:18.477 "zone_append": false, 00:07:18.477 "compare": false, 00:07:18.477 "compare_and_write": false, 00:07:18.477 "abort": true, 00:07:18.477 "seek_hole": false, 00:07:18.477 "seek_data": false, 00:07:18.477 "copy": true, 00:07:18.477 "nvme_iov_md": false 00:07:18.477 }, 00:07:18.477 "memory_domains": [ 00:07:18.477 { 00:07:18.477 "dma_device_id": "system", 00:07:18.477 "dma_device_type": 1 00:07:18.477 }, 00:07:18.477 { 00:07:18.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.477 "dma_device_type": 2 00:07:18.477 } 00:07:18.477 ], 00:07:18.477 "driver_specific": { 00:07:18.477 "passthru": { 00:07:18.477 "name": "Passthru0", 00:07:18.477 "base_bdev_name": "Malloc0" 00:07:18.477 } 00:07:18.477 } 00:07:18.477 } 00:07:18.477 ]' 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.477 16:48:47 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:18.477 16:48:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:18.477 00:07:18.477 real 0m0.299s 00:07:18.477 user 0m0.182s 00:07:18.477 sys 0m0.046s 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.477 16:48:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.477 ************************************ 00:07:18.477 END TEST rpc_integrity 00:07:18.477 ************************************ 00:07:18.477 16:48:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:18.477 16:48:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.477 16:48:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.477 16:48:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.477 ************************************ 00:07:18.477 START TEST rpc_plugins 00:07:18.477 ************************************ 00:07:18.477 16:48:47 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:07:18.477 16:48:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:07:18.477 16:48:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.477 16:48:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:18.477 16:48:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.477 16:48:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:18.477 16:48:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:18.477 16:48:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.477 16:48:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:18.477 16:48:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.477 16:48:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:18.477 { 00:07:18.477 "name": "Malloc1", 00:07:18.477 "aliases": [ 00:07:18.477 "a5c04afe-953f-46cc-a8c6-6382e1dfe13d" 00:07:18.477 ], 00:07:18.477 "product_name": "Malloc disk", 00:07:18.477 "block_size": 4096, 00:07:18.477 "num_blocks": 256, 00:07:18.478 "uuid": "a5c04afe-953f-46cc-a8c6-6382e1dfe13d", 00:07:18.478 "assigned_rate_limits": { 00:07:18.478 "rw_ios_per_sec": 0, 00:07:18.478 "rw_mbytes_per_sec": 0, 00:07:18.478 "r_mbytes_per_sec": 0, 00:07:18.478 "w_mbytes_per_sec": 0 00:07:18.478 }, 00:07:18.478 "claimed": false, 00:07:18.478 "zoned": false, 00:07:18.478 "supported_io_types": { 00:07:18.478 "read": true, 00:07:18.478 "write": true, 00:07:18.478 "unmap": true, 00:07:18.478 "flush": true, 00:07:18.478 "reset": true, 00:07:18.478 "nvme_admin": false, 00:07:18.478 "nvme_io": false, 00:07:18.478 "nvme_io_md": false, 00:07:18.478 "write_zeroes": true, 00:07:18.478 "zcopy": true, 00:07:18.478 "get_zone_info": false, 00:07:18.478 "zone_management": false, 00:07:18.478 "zone_append": false, 00:07:18.478 "compare": false, 00:07:18.478 "compare_and_write": false, 00:07:18.478 "abort": true, 00:07:18.478 "seek_hole": false, 00:07:18.478 "seek_data": false, 00:07:18.478 "copy": 
true, 00:07:18.478 "nvme_iov_md": false 00:07:18.478 }, 00:07:18.478 "memory_domains": [ 00:07:18.478 { 00:07:18.478 "dma_device_id": "system", 00:07:18.478 "dma_device_type": 1 00:07:18.478 }, 00:07:18.478 { 00:07:18.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.478 "dma_device_type": 2 00:07:18.478 } 00:07:18.478 ], 00:07:18.478 "driver_specific": {} 00:07:18.478 } 00:07:18.478 ]' 00:07:18.737 16:48:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:18.737 16:48:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:18.737 16:48:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:18.737 16:48:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.737 16:48:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:18.737 16:48:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.737 16:48:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:18.737 16:48:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.737 16:48:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:18.737 16:48:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.737 16:48:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:18.737 16:48:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:18.737 16:48:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:18.737 00:07:18.737 real 0m0.154s 00:07:18.737 user 0m0.091s 00:07:18.737 sys 0m0.025s 00:07:18.737 16:48:48 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.737 16:48:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:18.737 ************************************ 00:07:18.737 END TEST rpc_plugins 00:07:18.737 ************************************ 00:07:18.737 16:48:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:18.737 16:48:48 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.737 16:48:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.737 16:48:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.737 ************************************ 00:07:18.737 START TEST rpc_trace_cmd_test 00:07:18.737 ************************************ 00:07:18.737 16:48:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:07:18.737 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:18.737 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:18.737 16:48:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.737 16:48:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.737 16:48:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.737 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:18.737 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69109", 00:07:18.737 "tpoint_group_mask": "0x8", 00:07:18.737 "iscsi_conn": { 00:07:18.737 "mask": "0x2", 00:07:18.737 "tpoint_mask": "0x0" 00:07:18.737 }, 00:07:18.737 "scsi": { 00:07:18.737 "mask": "0x4", 00:07:18.737 "tpoint_mask": "0x0" 00:07:18.737 }, 00:07:18.737 "bdev": { 00:07:18.737 "mask": "0x8", 00:07:18.737 "tpoint_mask": "0xffffffffffffffff" 00:07:18.737 }, 00:07:18.737 "nvmf_rdma": { 00:07:18.737 "mask": "0x10", 00:07:18.737 "tpoint_mask": "0x0" 00:07:18.737 }, 00:07:18.737 "nvmf_tcp": { 00:07:18.737 "mask": "0x20", 00:07:18.737 "tpoint_mask": "0x0" 00:07:18.737 }, 00:07:18.737 "ftl": { 00:07:18.737 "mask": "0x40", 00:07:18.737 "tpoint_mask": "0x0" 00:07:18.737 }, 00:07:18.737 "blobfs": { 00:07:18.737 "mask": "0x80", 00:07:18.738 "tpoint_mask": "0x0" 00:07:18.738 }, 00:07:18.738 "dsa": { 00:07:18.738 "mask": "0x200", 00:07:18.738 "tpoint_mask": "0x0" 00:07:18.738 }, 00:07:18.738 "thread": { 00:07:18.738 "mask": "0x400", 00:07:18.738 
"tpoint_mask": "0x0" 00:07:18.738 }, 00:07:18.738 "nvme_pcie": { 00:07:18.738 "mask": "0x800", 00:07:18.738 "tpoint_mask": "0x0" 00:07:18.738 }, 00:07:18.738 "iaa": { 00:07:18.738 "mask": "0x1000", 00:07:18.738 "tpoint_mask": "0x0" 00:07:18.738 }, 00:07:18.738 "nvme_tcp": { 00:07:18.738 "mask": "0x2000", 00:07:18.738 "tpoint_mask": "0x0" 00:07:18.738 }, 00:07:18.738 "bdev_nvme": { 00:07:18.738 "mask": "0x4000", 00:07:18.738 "tpoint_mask": "0x0" 00:07:18.738 }, 00:07:18.738 "sock": { 00:07:18.738 "mask": "0x8000", 00:07:18.738 "tpoint_mask": "0x0" 00:07:18.738 }, 00:07:18.738 "blob": { 00:07:18.738 "mask": "0x10000", 00:07:18.738 "tpoint_mask": "0x0" 00:07:18.738 }, 00:07:18.738 "bdev_raid": { 00:07:18.738 "mask": "0x20000", 00:07:18.738 "tpoint_mask": "0x0" 00:07:18.738 } 00:07:18.738 }' 00:07:18.738 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:18.738 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:07:18.738 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:18.997 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:18.997 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:18.997 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:18.997 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:18.997 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:18.997 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:18.997 16:48:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:18.997 00:07:18.997 real 0m0.224s 00:07:18.997 user 0m0.176s 00:07:18.997 sys 0m0.038s 00:07:18.997 16:48:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.997 16:48:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.997 
************************************ 00:07:18.997 END TEST rpc_trace_cmd_test 00:07:18.997 ************************************ 00:07:18.997 16:48:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:18.997 16:48:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:18.997 16:48:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:18.997 16:48:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.997 16:48:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.997 16:48:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.997 ************************************ 00:07:18.997 START TEST rpc_daemon_integrity 00:07:18.997 ************************************ 00:07:18.997 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:18.997 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:18.998 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.998 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.998 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.998 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:18.998 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:18.998 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:18.998 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:18.998 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.998 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.257 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.257 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:19.257 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:07:19.257 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.257 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.257 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.257 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:19.257 { 00:07:19.257 "name": "Malloc2", 00:07:19.257 "aliases": [ 00:07:19.257 "3f9b263a-1b87-4362-961a-f4ce643acaa2" 00:07:19.257 ], 00:07:19.257 "product_name": "Malloc disk", 00:07:19.257 "block_size": 512, 00:07:19.257 "num_blocks": 16384, 00:07:19.257 "uuid": "3f9b263a-1b87-4362-961a-f4ce643acaa2", 00:07:19.257 "assigned_rate_limits": { 00:07:19.257 "rw_ios_per_sec": 0, 00:07:19.257 "rw_mbytes_per_sec": 0, 00:07:19.257 "r_mbytes_per_sec": 0, 00:07:19.257 "w_mbytes_per_sec": 0 00:07:19.257 }, 00:07:19.257 "claimed": false, 00:07:19.257 "zoned": false, 00:07:19.257 "supported_io_types": { 00:07:19.257 "read": true, 00:07:19.257 "write": true, 00:07:19.257 "unmap": true, 00:07:19.257 "flush": true, 00:07:19.257 "reset": true, 00:07:19.257 "nvme_admin": false, 00:07:19.257 "nvme_io": false, 00:07:19.257 "nvme_io_md": false, 00:07:19.257 "write_zeroes": true, 00:07:19.257 "zcopy": true, 00:07:19.257 "get_zone_info": false, 00:07:19.257 "zone_management": false, 00:07:19.257 "zone_append": false, 00:07:19.257 "compare": false, 00:07:19.257 "compare_and_write": false, 00:07:19.257 "abort": true, 00:07:19.257 "seek_hole": false, 00:07:19.257 "seek_data": false, 00:07:19.257 "copy": true, 00:07:19.257 "nvme_iov_md": false 00:07:19.257 }, 00:07:19.257 "memory_domains": [ 00:07:19.257 { 00:07:19.257 "dma_device_id": "system", 00:07:19.257 "dma_device_type": 1 00:07:19.257 }, 00:07:19.257 { 00:07:19.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.257 "dma_device_type": 2 00:07:19.257 } 00:07:19.257 ], 00:07:19.258 "driver_specific": {} 00:07:19.258 } 00:07:19.258 ]' 00:07:19.258 
16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.258 [2024-11-08 16:48:48.601507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:19.258 [2024-11-08 16:48:48.601569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.258 [2024-11-08 16:48:48.601592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:19.258 [2024-11-08 16:48:48.601601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.258 [2024-11-08 16:48:48.603995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.258 [2024-11-08 16:48:48.604035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:19.258 Passthru0 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:19.258 { 00:07:19.258 "name": "Malloc2", 00:07:19.258 "aliases": [ 00:07:19.258 "3f9b263a-1b87-4362-961a-f4ce643acaa2" 00:07:19.258 ], 00:07:19.258 "product_name": "Malloc disk", 00:07:19.258 "block_size": 512, 
00:07:19.258 "num_blocks": 16384, 00:07:19.258 "uuid": "3f9b263a-1b87-4362-961a-f4ce643acaa2", 00:07:19.258 "assigned_rate_limits": { 00:07:19.258 "rw_ios_per_sec": 0, 00:07:19.258 "rw_mbytes_per_sec": 0, 00:07:19.258 "r_mbytes_per_sec": 0, 00:07:19.258 "w_mbytes_per_sec": 0 00:07:19.258 }, 00:07:19.258 "claimed": true, 00:07:19.258 "claim_type": "exclusive_write", 00:07:19.258 "zoned": false, 00:07:19.258 "supported_io_types": { 00:07:19.258 "read": true, 00:07:19.258 "write": true, 00:07:19.258 "unmap": true, 00:07:19.258 "flush": true, 00:07:19.258 "reset": true, 00:07:19.258 "nvme_admin": false, 00:07:19.258 "nvme_io": false, 00:07:19.258 "nvme_io_md": false, 00:07:19.258 "write_zeroes": true, 00:07:19.258 "zcopy": true, 00:07:19.258 "get_zone_info": false, 00:07:19.258 "zone_management": false, 00:07:19.258 "zone_append": false, 00:07:19.258 "compare": false, 00:07:19.258 "compare_and_write": false, 00:07:19.258 "abort": true, 00:07:19.258 "seek_hole": false, 00:07:19.258 "seek_data": false, 00:07:19.258 "copy": true, 00:07:19.258 "nvme_iov_md": false 00:07:19.258 }, 00:07:19.258 "memory_domains": [ 00:07:19.258 { 00:07:19.258 "dma_device_id": "system", 00:07:19.258 "dma_device_type": 1 00:07:19.258 }, 00:07:19.258 { 00:07:19.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.258 "dma_device_type": 2 00:07:19.258 } 00:07:19.258 ], 00:07:19.258 "driver_specific": {} 00:07:19.258 }, 00:07:19.258 { 00:07:19.258 "name": "Passthru0", 00:07:19.258 "aliases": [ 00:07:19.258 "8a599c77-83d7-5981-94f4-5bca42e92e82" 00:07:19.258 ], 00:07:19.258 "product_name": "passthru", 00:07:19.258 "block_size": 512, 00:07:19.258 "num_blocks": 16384, 00:07:19.258 "uuid": "8a599c77-83d7-5981-94f4-5bca42e92e82", 00:07:19.258 "assigned_rate_limits": { 00:07:19.258 "rw_ios_per_sec": 0, 00:07:19.258 "rw_mbytes_per_sec": 0, 00:07:19.258 "r_mbytes_per_sec": 0, 00:07:19.258 "w_mbytes_per_sec": 0 00:07:19.258 }, 00:07:19.258 "claimed": false, 00:07:19.258 "zoned": false, 00:07:19.258 
"supported_io_types": { 00:07:19.258 "read": true, 00:07:19.258 "write": true, 00:07:19.258 "unmap": true, 00:07:19.258 "flush": true, 00:07:19.258 "reset": true, 00:07:19.258 "nvme_admin": false, 00:07:19.258 "nvme_io": false, 00:07:19.258 "nvme_io_md": false, 00:07:19.258 "write_zeroes": true, 00:07:19.258 "zcopy": true, 00:07:19.258 "get_zone_info": false, 00:07:19.258 "zone_management": false, 00:07:19.258 "zone_append": false, 00:07:19.258 "compare": false, 00:07:19.258 "compare_and_write": false, 00:07:19.258 "abort": true, 00:07:19.258 "seek_hole": false, 00:07:19.258 "seek_data": false, 00:07:19.258 "copy": true, 00:07:19.258 "nvme_iov_md": false 00:07:19.258 }, 00:07:19.258 "memory_domains": [ 00:07:19.258 { 00:07:19.258 "dma_device_id": "system", 00:07:19.258 "dma_device_type": 1 00:07:19.258 }, 00:07:19.258 { 00:07:19.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.258 "dma_device_type": 2 00:07:19.258 } 00:07:19.258 ], 00:07:19.258 "driver_specific": { 00:07:19.258 "passthru": { 00:07:19.258 "name": "Passthru0", 00:07:19.258 "base_bdev_name": "Malloc2" 00:07:19.258 } 00:07:19.258 } 00:07:19.258 } 00:07:19.258 ]' 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:19.258 00:07:19.258 real 0m0.313s 00:07:19.258 user 0m0.198s 00:07:19.258 sys 0m0.043s 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.258 16:48:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.258 ************************************ 00:07:19.258 END TEST rpc_daemon_integrity 00:07:19.258 ************************************ 00:07:19.516 16:48:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:19.516 16:48:48 rpc -- rpc/rpc.sh@84 -- # killprocess 69109 00:07:19.516 16:48:48 rpc -- common/autotest_common.sh@950 -- # '[' -z 69109 ']' 00:07:19.516 16:48:48 rpc -- common/autotest_common.sh@954 -- # kill -0 69109 00:07:19.516 16:48:48 rpc -- common/autotest_common.sh@955 -- # uname 00:07:19.516 16:48:48 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.516 16:48:48 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69109 00:07:19.516 16:48:48 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.516 16:48:48 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.516 16:48:48 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69109' 00:07:19.516 
killing process with pid 69109 00:07:19.516 16:48:48 rpc -- common/autotest_common.sh@969 -- # kill 69109 00:07:19.516 16:48:48 rpc -- common/autotest_common.sh@974 -- # wait 69109 00:07:19.775 00:07:19.775 real 0m2.771s 00:07:19.775 user 0m3.285s 00:07:19.775 sys 0m0.834s 00:07:19.775 16:48:49 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.775 16:48:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.775 ************************************ 00:07:19.775 END TEST rpc 00:07:19.775 ************************************ 00:07:19.775 16:48:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:19.775 16:48:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.775 16:48:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.775 16:48:49 -- common/autotest_common.sh@10 -- # set +x 00:07:20.036 ************************************ 00:07:20.036 START TEST skip_rpc 00:07:20.036 ************************************ 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:20.036 * Looking for test storage... 
00:07:20.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.036 16:48:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:20.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.036 --rc genhtml_branch_coverage=1 00:07:20.036 --rc genhtml_function_coverage=1 00:07:20.036 --rc genhtml_legend=1 00:07:20.036 --rc geninfo_all_blocks=1 00:07:20.036 --rc geninfo_unexecuted_blocks=1 00:07:20.036 00:07:20.036 ' 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:20.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.036 --rc genhtml_branch_coverage=1 00:07:20.036 --rc genhtml_function_coverage=1 00:07:20.036 --rc genhtml_legend=1 00:07:20.036 --rc geninfo_all_blocks=1 00:07:20.036 --rc geninfo_unexecuted_blocks=1 00:07:20.036 00:07:20.036 ' 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:07:20.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.036 --rc genhtml_branch_coverage=1 00:07:20.036 --rc genhtml_function_coverage=1 00:07:20.036 --rc genhtml_legend=1 00:07:20.036 --rc geninfo_all_blocks=1 00:07:20.036 --rc geninfo_unexecuted_blocks=1 00:07:20.036 00:07:20.036 ' 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:20.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.036 --rc genhtml_branch_coverage=1 00:07:20.036 --rc genhtml_function_coverage=1 00:07:20.036 --rc genhtml_legend=1 00:07:20.036 --rc geninfo_all_blocks=1 00:07:20.036 --rc geninfo_unexecuted_blocks=1 00:07:20.036 00:07:20.036 ' 00:07:20.036 16:48:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:20.036 16:48:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:20.036 16:48:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.036 16:48:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.036 ************************************ 00:07:20.036 START TEST skip_rpc 00:07:20.036 ************************************ 00:07:20.036 16:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:07:20.036 16:48:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69311 00:07:20.036 16:48:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:20.036 16:48:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:20.036 16:48:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:20.296 [2024-11-08 16:48:49.631086] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:20.296 [2024-11-08 16:48:49.631212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69311 ] 00:07:20.296 [2024-11-08 16:48:49.793088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.555 [2024-11-08 16:48:49.839047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69311 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69311 ']' 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69311 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69311 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.831 killing process with pid 69311 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69311' 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69311 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69311 00:07:25.831 00:07:25.831 real 0m5.452s 00:07:25.831 user 0m5.032s 00:07:25.831 sys 0m0.346s 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.831 16:48:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.831 ************************************ 00:07:25.831 END TEST skip_rpc 00:07:25.831 ************************************ 00:07:25.831 16:48:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:25.831 16:48:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.831 16:48:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.831 16:48:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.831 
************************************ 00:07:25.831 START TEST skip_rpc_with_json 00:07:25.831 ************************************ 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69398 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69398 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69398 ']' 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.831 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:25.831 [2024-11-08 16:48:55.132619] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:25.831 [2024-11-08 16:48:55.132767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69398 ] 00:07:25.832 [2024-11-08 16:48:55.294054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.832 [2024-11-08 16:48:55.340565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:26.772 [2024-11-08 16:48:55.936732] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:26.772 request: 00:07:26.772 { 00:07:26.772 "trtype": "tcp", 00:07:26.772 "method": "nvmf_get_transports", 00:07:26.772 "req_id": 1 00:07:26.772 } 00:07:26.772 Got JSON-RPC error response 00:07:26.772 response: 00:07:26.772 { 00:07:26.772 "code": -19, 00:07:26.772 "message": "No such device" 00:07:26.772 } 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:26.772 [2024-11-08 16:48:55.948802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
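The failed `nvmf_get_transports` call above, answered with a JSON-RPC error before any transport exists, shows the negative-check pattern these tests rely on: the RPC must fail, and the error payload must carry the expected code. The sketch below is a self-contained stand-in; `rpc_cmd` here is a hypothetical stub that replays the canned response from the log rather than talking to a live SPDK target, and `check_no_transport` is an illustrative name, not a function from the test suite.

```shell
# Hypothetical stub standing in for rpc_cmd: it replays the error
# response captured in the log above and exits non-zero, the way the
# real call does before nvmf_create_transport has run.
rpc_cmd() {
    cat <<'EOF'
{
  "code": -19,
  "message": "No such device"
}
EOF
    return 1
}

# The negative check: the call must fail AND the payload must name
# the expected JSON-RPC error code (-19, "No such device").
check_no_transport() {
    local out
    if out=$(rpc_cmd nvmf_get_transports --trtype tcp); then
        return 1 # unexpectedly succeeded
    fi
    grep -q '"code": -19' <<< "$out"
}
```

Only after `nvmf_create_transport -t tcp` succeeds does the trace proceed to `save_config` and dump the full JSON shown below.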
00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.772 16:48:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:26.772 16:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.772 16:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:26.772 { 00:07:26.772 "subsystems": [ 00:07:26.772 { 00:07:26.772 "subsystem": "fsdev", 00:07:26.772 "config": [ 00:07:26.772 { 00:07:26.772 "method": "fsdev_set_opts", 00:07:26.772 "params": { 00:07:26.772 "fsdev_io_pool_size": 65535, 00:07:26.772 "fsdev_io_cache_size": 256 00:07:26.772 } 00:07:26.772 } 00:07:26.772 ] 00:07:26.772 }, 00:07:26.772 { 00:07:26.772 "subsystem": "keyring", 00:07:26.772 "config": [] 00:07:26.772 }, 00:07:26.772 { 00:07:26.772 "subsystem": "iobuf", 00:07:26.772 "config": [ 00:07:26.772 { 00:07:26.772 "method": "iobuf_set_options", 00:07:26.772 "params": { 00:07:26.772 "small_pool_count": 8192, 00:07:26.772 "large_pool_count": 1024, 00:07:26.772 "small_bufsize": 8192, 00:07:26.772 "large_bufsize": 135168 00:07:26.772 } 00:07:26.772 } 00:07:26.772 ] 00:07:26.772 }, 00:07:26.772 { 00:07:26.772 "subsystem": "sock", 00:07:26.772 "config": [ 00:07:26.772 { 00:07:26.772 "method": "sock_set_default_impl", 00:07:26.772 "params": { 00:07:26.772 "impl_name": "posix" 00:07:26.772 } 00:07:26.772 }, 00:07:26.772 { 00:07:26.772 "method": "sock_impl_set_options", 00:07:26.772 "params": { 00:07:26.772 "impl_name": "ssl", 00:07:26.772 "recv_buf_size": 4096, 00:07:26.772 "send_buf_size": 4096, 00:07:26.772 "enable_recv_pipe": true, 00:07:26.772 "enable_quickack": false, 00:07:26.772 "enable_placement_id": 0, 00:07:26.772 
"enable_zerocopy_send_server": true, 00:07:26.772 "enable_zerocopy_send_client": false, 00:07:26.772 "zerocopy_threshold": 0, 00:07:26.772 "tls_version": 0, 00:07:26.772 "enable_ktls": false 00:07:26.772 } 00:07:26.772 }, 00:07:26.772 { 00:07:26.772 "method": "sock_impl_set_options", 00:07:26.772 "params": { 00:07:26.772 "impl_name": "posix", 00:07:26.772 "recv_buf_size": 2097152, 00:07:26.772 "send_buf_size": 2097152, 00:07:26.772 "enable_recv_pipe": true, 00:07:26.772 "enable_quickack": false, 00:07:26.772 "enable_placement_id": 0, 00:07:26.772 "enable_zerocopy_send_server": true, 00:07:26.773 "enable_zerocopy_send_client": false, 00:07:26.773 "zerocopy_threshold": 0, 00:07:26.773 "tls_version": 0, 00:07:26.773 "enable_ktls": false 00:07:26.773 } 00:07:26.773 } 00:07:26.773 ] 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "subsystem": "vmd", 00:07:26.773 "config": [] 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "subsystem": "accel", 00:07:26.773 "config": [ 00:07:26.773 { 00:07:26.773 "method": "accel_set_options", 00:07:26.773 "params": { 00:07:26.773 "small_cache_size": 128, 00:07:26.773 "large_cache_size": 16, 00:07:26.773 "task_count": 2048, 00:07:26.773 "sequence_count": 2048, 00:07:26.773 "buf_count": 2048 00:07:26.773 } 00:07:26.773 } 00:07:26.773 ] 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "subsystem": "bdev", 00:07:26.773 "config": [ 00:07:26.773 { 00:07:26.773 "method": "bdev_set_options", 00:07:26.773 "params": { 00:07:26.773 "bdev_io_pool_size": 65535, 00:07:26.773 "bdev_io_cache_size": 256, 00:07:26.773 "bdev_auto_examine": true, 00:07:26.773 "iobuf_small_cache_size": 128, 00:07:26.773 "iobuf_large_cache_size": 16 00:07:26.773 } 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "method": "bdev_raid_set_options", 00:07:26.773 "params": { 00:07:26.773 "process_window_size_kb": 1024, 00:07:26.773 "process_max_bandwidth_mb_sec": 0 00:07:26.773 } 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "method": "bdev_iscsi_set_options", 00:07:26.773 "params": { 00:07:26.773 
"timeout_sec": 30 00:07:26.773 } 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "method": "bdev_nvme_set_options", 00:07:26.773 "params": { 00:07:26.773 "action_on_timeout": "none", 00:07:26.773 "timeout_us": 0, 00:07:26.773 "timeout_admin_us": 0, 00:07:26.773 "keep_alive_timeout_ms": 10000, 00:07:26.773 "arbitration_burst": 0, 00:07:26.773 "low_priority_weight": 0, 00:07:26.773 "medium_priority_weight": 0, 00:07:26.773 "high_priority_weight": 0, 00:07:26.773 "nvme_adminq_poll_period_us": 10000, 00:07:26.773 "nvme_ioq_poll_period_us": 0, 00:07:26.773 "io_queue_requests": 0, 00:07:26.773 "delay_cmd_submit": true, 00:07:26.773 "transport_retry_count": 4, 00:07:26.773 "bdev_retry_count": 3, 00:07:26.773 "transport_ack_timeout": 0, 00:07:26.773 "ctrlr_loss_timeout_sec": 0, 00:07:26.773 "reconnect_delay_sec": 0, 00:07:26.773 "fast_io_fail_timeout_sec": 0, 00:07:26.773 "disable_auto_failback": false, 00:07:26.773 "generate_uuids": false, 00:07:26.773 "transport_tos": 0, 00:07:26.773 "nvme_error_stat": false, 00:07:26.773 "rdma_srq_size": 0, 00:07:26.773 "io_path_stat": false, 00:07:26.773 "allow_accel_sequence": false, 00:07:26.773 "rdma_max_cq_size": 0, 00:07:26.773 "rdma_cm_event_timeout_ms": 0, 00:07:26.773 "dhchap_digests": [ 00:07:26.773 "sha256", 00:07:26.773 "sha384", 00:07:26.773 "sha512" 00:07:26.773 ], 00:07:26.773 "dhchap_dhgroups": [ 00:07:26.773 "null", 00:07:26.773 "ffdhe2048", 00:07:26.773 "ffdhe3072", 00:07:26.773 "ffdhe4096", 00:07:26.773 "ffdhe6144", 00:07:26.773 "ffdhe8192" 00:07:26.773 ] 00:07:26.773 } 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "method": "bdev_nvme_set_hotplug", 00:07:26.773 "params": { 00:07:26.773 "period_us": 100000, 00:07:26.773 "enable": false 00:07:26.773 } 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "method": "bdev_wait_for_examine" 00:07:26.773 } 00:07:26.773 ] 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "subsystem": "scsi", 00:07:26.773 "config": null 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "subsystem": "scheduler", 
00:07:26.773 "config": [ 00:07:26.773 { 00:07:26.773 "method": "framework_set_scheduler", 00:07:26.773 "params": { 00:07:26.773 "name": "static" 00:07:26.773 } 00:07:26.773 } 00:07:26.773 ] 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "subsystem": "vhost_scsi", 00:07:26.773 "config": [] 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "subsystem": "vhost_blk", 00:07:26.773 "config": [] 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "subsystem": "ublk", 00:07:26.773 "config": [] 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "subsystem": "nbd", 00:07:26.773 "config": [] 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "subsystem": "nvmf", 00:07:26.773 "config": [ 00:07:26.773 { 00:07:26.773 "method": "nvmf_set_config", 00:07:26.773 "params": { 00:07:26.773 "discovery_filter": "match_any", 00:07:26.773 "admin_cmd_passthru": { 00:07:26.773 "identify_ctrlr": false 00:07:26.773 }, 00:07:26.773 "dhchap_digests": [ 00:07:26.773 "sha256", 00:07:26.773 "sha384", 00:07:26.773 "sha512" 00:07:26.773 ], 00:07:26.773 "dhchap_dhgroups": [ 00:07:26.773 "null", 00:07:26.773 "ffdhe2048", 00:07:26.773 "ffdhe3072", 00:07:26.773 "ffdhe4096", 00:07:26.773 "ffdhe6144", 00:07:26.773 "ffdhe8192" 00:07:26.773 ] 00:07:26.773 } 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "method": "nvmf_set_max_subsystems", 00:07:26.773 "params": { 00:07:26.773 "max_subsystems": 1024 00:07:26.773 } 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "method": "nvmf_set_crdt", 00:07:26.773 "params": { 00:07:26.773 "crdt1": 0, 00:07:26.773 "crdt2": 0, 00:07:26.773 "crdt3": 0 00:07:26.773 } 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "method": "nvmf_create_transport", 00:07:26.773 "params": { 00:07:26.773 "trtype": "TCP", 00:07:26.773 "max_queue_depth": 128, 00:07:26.773 "max_io_qpairs_per_ctrlr": 127, 00:07:26.773 "in_capsule_data_size": 4096, 00:07:26.773 "max_io_size": 131072, 00:07:26.773 "io_unit_size": 131072, 00:07:26.773 "max_aq_depth": 128, 00:07:26.773 "num_shared_buffers": 511, 00:07:26.773 "buf_cache_size": 4294967295, 
00:07:26.773 "dif_insert_or_strip": false, 00:07:26.773 "zcopy": false, 00:07:26.773 "c2h_success": true, 00:07:26.773 "sock_priority": 0, 00:07:26.773 "abort_timeout_sec": 1, 00:07:26.773 "ack_timeout": 0, 00:07:26.773 "data_wr_pool_size": 0 00:07:26.773 } 00:07:26.773 } 00:07:26.773 ] 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "subsystem": "iscsi", 00:07:26.773 "config": [ 00:07:26.773 { 00:07:26.773 "method": "iscsi_set_options", 00:07:26.773 "params": { 00:07:26.773 "node_base": "iqn.2016-06.io.spdk", 00:07:26.773 "max_sessions": 128, 00:07:26.773 "max_connections_per_session": 2, 00:07:26.773 "max_queue_depth": 64, 00:07:26.773 "default_time2wait": 2, 00:07:26.773 "default_time2retain": 20, 00:07:26.773 "first_burst_length": 8192, 00:07:26.773 "immediate_data": true, 00:07:26.773 "allow_duplicated_isid": false, 00:07:26.773 "error_recovery_level": 0, 00:07:26.773 "nop_timeout": 60, 00:07:26.773 "nop_in_interval": 30, 00:07:26.773 "disable_chap": false, 00:07:26.773 "require_chap": false, 00:07:26.773 "mutual_chap": false, 00:07:26.773 "chap_group": 0, 00:07:26.773 "max_large_datain_per_connection": 64, 00:07:26.773 "max_r2t_per_connection": 4, 00:07:26.773 "pdu_pool_size": 36864, 00:07:26.773 "immediate_data_pool_size": 16384, 00:07:26.773 "data_out_pool_size": 2048 00:07:26.773 } 00:07:26.773 } 00:07:26.773 ] 00:07:26.773 } 00:07:26.773 ] 00:07:26.773 } 00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69398 00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69398 ']' 00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69398 00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69398 00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.773 killing process with pid 69398 00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69398' 00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69398 00:07:26.773 16:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69398 00:07:27.034 16:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69427 00:07:27.034 16:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:27.034 16:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:32.320 16:49:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69427 00:07:32.320 16:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69427 ']' 00:07:32.320 16:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69427 00:07:32.320 16:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:32.320 16:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.320 16:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69427 00:07:32.320 16:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.320 16:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.320 killing process with pid 69427 
00:07:32.320 16:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69427' 00:07:32.320 16:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69427 00:07:32.320 16:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69427 00:07:32.580 16:49:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:32.580 16:49:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:32.580 00:07:32.580 real 0m6.963s 00:07:32.580 user 0m6.456s 00:07:32.580 sys 0m0.781s 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:32.580 ************************************ 00:07:32.580 END TEST skip_rpc_with_json 00:07:32.580 ************************************ 00:07:32.580 16:49:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:32.580 16:49:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.580 16:49:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.580 16:49:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.580 ************************************ 00:07:32.580 START TEST skip_rpc_with_delay 00:07:32.580 ************************************ 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:32.580 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:32.840 [2024-11-08 16:49:02.163779] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
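The `NOT ... spdk_tgt --wait-for-rpc` sequence above is the inverted-expectation helper this suite uses for commands that are supposed to fail. A minimal sketch of that pattern, reconstructed from the `valid_exec_arg` / `es` bookkeeping visible in the trace (an approximation, not the verbatim autotest_common.sh source):

```shell
# Check that the first word of the wrapped command is actually
# runnable: a function, builtin, or an executable file on disk.
valid_exec_arg() {
    local arg=$1
    case "$(type -t "$arg")" in
        function|builtin|file) return 0 ;;
        *) [[ -x $(type -P "$arg") ]] ;;
    esac
}

# NOT succeeds only when the wrapped command fails. The trace's
# `(( !es == 0 ))` check is equivalent to testing es != 0.
NOT() {
    local es=0
    valid_exec_arg "$@" || return 1
    "$@" || es=$?
    (( es != 0 ))
}
```

With this shape, `NOT rpc_cmd spdk_get_version` (as in the skip_rpc test above) passes exactly when the RPC call errors out, e.g. because the target was started with `--no-rpc-server`.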
00:07:32.840 [2024-11-08 16:49:02.163891] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:32.840 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:32.840 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.840 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.840 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.840 00:07:32.840 real 0m0.156s 00:07:32.840 user 0m0.092s 00:07:32.840 sys 0m0.063s 00:07:32.840 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.840 16:49:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:32.840 ************************************ 00:07:32.840 END TEST skip_rpc_with_delay 00:07:32.840 ************************************ 00:07:32.840 16:49:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:32.840 16:49:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:32.840 16:49:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:32.840 16:49:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.840 16:49:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.840 16:49:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.840 ************************************ 00:07:32.840 START TEST exit_on_failed_rpc_init 00:07:32.840 ************************************ 00:07:32.840 16:49:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:32.840 16:49:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69534 00:07:32.840 16:49:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:07:32.840 16:49:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69534 00:07:32.840 16:49:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69534 ']' 00:07:32.840 16:49:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.840 16:49:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.840 16:49:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.840 16:49:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.840 16:49:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:33.100 [2024-11-08 16:49:02.388183] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:33.100 [2024-11-08 16:49:02.388344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69534 ] 00:07:33.100 [2024-11-08 16:49:02.548274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.100 [2024-11-08 16:49:02.592258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.037 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.037 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:34.037 16:49:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:34.038 16:49:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:34.038 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:34.038 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:34.038 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.038 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.038 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.038 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.038 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.038 16:49:03 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.038 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.038 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:34.038 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:34.038 [2024-11-08 16:49:03.336585] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:34.038 [2024-11-08 16:49:03.336734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69551 ] 00:07:34.038 [2024-11-08 16:49:03.498384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.038 [2024-11-08 16:49:03.546218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.038 [2024-11-08 16:49:03.546330] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:34.038 [2024-11-08 16:49:03.546354] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:34.038 [2024-11-08 16:49:03.546366] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69534 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69534 ']' 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69534 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69534 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.298 killing process with pid 69534 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 69534' 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69534 00:07:34.298 16:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69534 00:07:34.866 00:07:34.866 real 0m1.804s 00:07:34.866 user 0m1.970s 00:07:34.866 sys 0m0.518s 00:07:34.866 16:49:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.866 16:49:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:34.866 ************************************ 00:07:34.866 END TEST exit_on_failed_rpc_init 00:07:34.866 ************************************ 00:07:34.866 16:49:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:34.866 00:07:34.866 real 0m14.851s 00:07:34.866 user 0m13.733s 00:07:34.866 sys 0m2.027s 00:07:34.866 16:49:04 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.866 16:49:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.866 ************************************ 00:07:34.866 END TEST skip_rpc 00:07:34.866 ************************************ 00:07:34.866 16:49:04 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:34.866 16:49:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.866 16:49:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.866 16:49:04 -- common/autotest_common.sh@10 -- # set +x 00:07:34.866 ************************************ 00:07:34.866 START TEST rpc_client 00:07:34.866 ************************************ 00:07:34.866 16:49:04 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:34.866 * Looking for test storage... 
00:07:34.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:34.866 16:49:04 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:34.866 16:49:04 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:07:34.866 16:49:04 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.126 16:49:04 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.126 16:49:04 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:35.126 16:49:04 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.126 16:49:04 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.126 --rc genhtml_branch_coverage=1 00:07:35.126 --rc genhtml_function_coverage=1 00:07:35.127 --rc genhtml_legend=1 00:07:35.127 --rc geninfo_all_blocks=1 00:07:35.127 --rc geninfo_unexecuted_blocks=1 00:07:35.127 00:07:35.127 ' 00:07:35.127 16:49:04 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.127 --rc genhtml_branch_coverage=1 00:07:35.127 --rc genhtml_function_coverage=1 00:07:35.127 --rc genhtml_legend=1 00:07:35.127 --rc geninfo_all_blocks=1 00:07:35.127 --rc geninfo_unexecuted_blocks=1 00:07:35.127 00:07:35.127 ' 00:07:35.127 16:49:04 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.127 --rc genhtml_branch_coverage=1 00:07:35.127 --rc genhtml_function_coverage=1 00:07:35.127 --rc genhtml_legend=1 00:07:35.127 --rc geninfo_all_blocks=1 00:07:35.127 --rc geninfo_unexecuted_blocks=1 00:07:35.127 00:07:35.127 ' 00:07:35.127 16:49:04 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.127 --rc genhtml_branch_coverage=1 00:07:35.127 --rc genhtml_function_coverage=1 00:07:35.127 --rc genhtml_legend=1 00:07:35.127 --rc geninfo_all_blocks=1 00:07:35.127 --rc geninfo_unexecuted_blocks=1 00:07:35.127 00:07:35.127 ' 00:07:35.127 16:49:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:35.127 OK 00:07:35.127 16:49:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:35.127 00:07:35.127 real 0m0.284s 00:07:35.127 user 0m0.150s 00:07:35.127 sys 0m0.152s 00:07:35.127 16:49:04 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.127 16:49:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:35.127 ************************************ 00:07:35.127 END TEST rpc_client 00:07:35.127 ************************************ 00:07:35.127 16:49:04 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:35.127 16:49:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.127 16:49:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.127 16:49:04 -- common/autotest_common.sh@10 -- # set +x 00:07:35.127 ************************************ 00:07:35.127 START TEST json_config 00:07:35.127 ************************************ 00:07:35.127 16:49:04 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:35.127 16:49:04 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.127 16:49:04 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.127 16:49:04 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.386 16:49:04 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.386 16:49:04 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.386 16:49:04 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.386 16:49:04 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.387 16:49:04 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.387 16:49:04 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.387 16:49:04 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.387 16:49:04 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.387 16:49:04 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.387 16:49:04 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.387 16:49:04 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.387 16:49:04 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.387 16:49:04 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:35.387 16:49:04 json_config -- scripts/common.sh@345 -- # : 1 00:07:35.387 16:49:04 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.387 16:49:04 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.387 16:49:04 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:35.387 16:49:04 json_config -- scripts/common.sh@353 -- # local d=1 00:07:35.387 16:49:04 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.387 16:49:04 json_config -- scripts/common.sh@355 -- # echo 1 00:07:35.387 16:49:04 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.387 16:49:04 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:35.387 16:49:04 json_config -- scripts/common.sh@353 -- # local d=2 00:07:35.387 16:49:04 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.387 16:49:04 json_config -- scripts/common.sh@355 -- # echo 2 00:07:35.387 16:49:04 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.387 16:49:04 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.387 16:49:04 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.387 16:49:04 json_config -- scripts/common.sh@368 -- # return 0 00:07:35.387 16:49:04 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.387 16:49:04 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.387 --rc genhtml_branch_coverage=1 00:07:35.387 --rc genhtml_function_coverage=1 00:07:35.387 --rc genhtml_legend=1 00:07:35.387 --rc geninfo_all_blocks=1 00:07:35.387 --rc geninfo_unexecuted_blocks=1 00:07:35.387 00:07:35.387 ' 00:07:35.387 16:49:04 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.387 --rc genhtml_branch_coverage=1 00:07:35.387 --rc genhtml_function_coverage=1 00:07:35.387 --rc genhtml_legend=1 00:07:35.387 --rc geninfo_all_blocks=1 00:07:35.387 --rc geninfo_unexecuted_blocks=1 00:07:35.387 00:07:35.387 ' 00:07:35.387 16:49:04 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.387 --rc genhtml_branch_coverage=1 00:07:35.387 --rc genhtml_function_coverage=1 00:07:35.387 --rc genhtml_legend=1 00:07:35.387 --rc geninfo_all_blocks=1 00:07:35.387 --rc geninfo_unexecuted_blocks=1 00:07:35.387 00:07:35.387 ' 00:07:35.387 16:49:04 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.387 --rc genhtml_branch_coverage=1 00:07:35.387 --rc genhtml_function_coverage=1 00:07:35.387 --rc genhtml_legend=1 00:07:35.387 --rc geninfo_all_blocks=1 00:07:35.387 --rc geninfo_unexecuted_blocks=1 00:07:35.387 00:07:35.387 ' 00:07:35.387 16:49:04 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a68b413-089f-4012-909f-922ea4c3e36c 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5a68b413-089f-4012-909f-922ea4c3e36c 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.387 16:49:04 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.387 16:49:04 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.387 16:49:04 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.387 16:49:04 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.387 16:49:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.387 16:49:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.387 16:49:04 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.387 16:49:04 json_config -- paths/export.sh@5 -- # export PATH 00:07:35.387 16:49:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@51 -- # : 0 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.387 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.387 16:49:04 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.387 16:49:04 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:07:35.387 16:49:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:35.387 16:49:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:35.387 16:49:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:35.387 16:49:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:35.387 WARNING: No tests are enabled so not running JSON configuration tests 00:07:35.387 16:49:04 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:35.387 16:49:04 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:35.387 00:07:35.387 real 0m0.228s 00:07:35.387 user 0m0.145s 00:07:35.387 sys 0m0.091s 00:07:35.387 16:49:04 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.387 16:49:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.387 ************************************ 00:07:35.387 END TEST json_config 00:07:35.387 ************************************ 00:07:35.387 16:49:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:35.387 16:49:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.387 16:49:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.387 16:49:04 -- common/autotest_common.sh@10 -- # set +x 00:07:35.387 ************************************ 00:07:35.387 START TEST json_config_extra_key 00:07:35.387 ************************************ 00:07:35.387 16:49:04 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:35.650 16:49:04 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.650 16:49:04 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:07:35.650 16:49:04 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.650 16:49:05 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:35.650 16:49:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:35.651 16:49:05 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.651 16:49:05 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.651 --rc genhtml_branch_coverage=1 00:07:35.651 --rc genhtml_function_coverage=1 00:07:35.651 --rc genhtml_legend=1 00:07:35.651 --rc geninfo_all_blocks=1 00:07:35.651 --rc geninfo_unexecuted_blocks=1 00:07:35.651 00:07:35.651 ' 00:07:35.651 16:49:05 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.651 --rc genhtml_branch_coverage=1 00:07:35.651 --rc genhtml_function_coverage=1 00:07:35.651 --rc 
genhtml_legend=1 00:07:35.651 --rc geninfo_all_blocks=1 00:07:35.651 --rc geninfo_unexecuted_blocks=1 00:07:35.651 00:07:35.651 ' 00:07:35.651 16:49:05 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.651 --rc genhtml_branch_coverage=1 00:07:35.651 --rc genhtml_function_coverage=1 00:07:35.651 --rc genhtml_legend=1 00:07:35.651 --rc geninfo_all_blocks=1 00:07:35.651 --rc geninfo_unexecuted_blocks=1 00:07:35.651 00:07:35.651 ' 00:07:35.651 16:49:05 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.651 --rc genhtml_branch_coverage=1 00:07:35.651 --rc genhtml_function_coverage=1 00:07:35.651 --rc genhtml_legend=1 00:07:35.651 --rc geninfo_all_blocks=1 00:07:35.651 --rc geninfo_unexecuted_blocks=1 00:07:35.651 00:07:35.651 ' 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5a68b413-089f-4012-909f-922ea4c3e36c 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5a68b413-089f-4012-909f-922ea4c3e36c 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.651 16:49:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.651 16:49:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.651 16:49:05 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.651 16:49:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.651 16:49:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:35.651 16:49:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.651 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.651 16:49:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:35.651 INFO: launching applications... 00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:07:35.651 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:35.651 16:49:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:35.651 16:49:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:35.651 16:49:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:35.651 16:49:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:35.651 16:49:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:35.651 16:49:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:35.651 16:49:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:35.651 16:49:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69739 00:07:35.651 Waiting for target to run... 00:07:35.651 16:49:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:35.651 16:49:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69739 /var/tmp/spdk_tgt.sock 00:07:35.651 16:49:05 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69739 ']' 00:07:35.651 16:49:05 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:35.651 16:49:05 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:35.651 16:49:05 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:35.651 16:49:05 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:35.651 16:49:05 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.651 16:49:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:35.651 [2024-11-08 16:49:05.168271] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:35.651 [2024-11-08 16:49:05.168424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69739 ] 00:07:36.221 [2024-11-08 16:49:05.531984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.221 [2024-11-08 16:49:05.562246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.480 16:49:05 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.480 16:49:05 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:36.480 16:49:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:36.480 00:07:36.480 INFO: shutting down applications... 00:07:36.480 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:36.480 16:49:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:36.480 16:49:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:36.480 16:49:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:36.480 16:49:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69739 ]] 00:07:36.480 16:49:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69739 00:07:36.480 16:49:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:36.480 16:49:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:36.480 16:49:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69739 00:07:36.480 16:49:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:37.050 16:49:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:37.050 16:49:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:37.050 16:49:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69739 00:07:37.050 16:49:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:37.050 16:49:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:37.050 16:49:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:37.050 SPDK target shutdown done 00:07:37.050 16:49:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:37.050 Success 00:07:37.050 16:49:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:37.050 00:07:37.050 real 0m1.651s 00:07:37.050 user 0m1.365s 00:07:37.050 sys 0m0.487s 00:07:37.050 16:49:06 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.050 16:49:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:37.050 ************************************ 
00:07:37.050 END TEST json_config_extra_key 00:07:37.050 ************************************ 00:07:37.050 16:49:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:37.050 16:49:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.050 16:49:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.050 16:49:06 -- common/autotest_common.sh@10 -- # set +x 00:07:37.050 ************************************ 00:07:37.050 START TEST alias_rpc 00:07:37.050 ************************************ 00:07:37.050 16:49:06 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:37.311 * Looking for test storage... 00:07:37.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.311 16:49:06 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.311 16:49:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:37.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.311 --rc genhtml_branch_coverage=1 00:07:37.311 --rc genhtml_function_coverage=1 00:07:37.311 --rc genhtml_legend=1 00:07:37.311 --rc geninfo_all_blocks=1 00:07:37.311 --rc geninfo_unexecuted_blocks=1 00:07:37.311 00:07:37.311 ' 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:37.311 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.311 --rc genhtml_branch_coverage=1 00:07:37.311 --rc genhtml_function_coverage=1 00:07:37.311 --rc genhtml_legend=1 00:07:37.311 --rc geninfo_all_blocks=1 00:07:37.311 --rc geninfo_unexecuted_blocks=1 00:07:37.311 00:07:37.311 ' 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:37.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.311 --rc genhtml_branch_coverage=1 00:07:37.311 --rc genhtml_function_coverage=1 00:07:37.311 --rc genhtml_legend=1 00:07:37.311 --rc geninfo_all_blocks=1 00:07:37.311 --rc geninfo_unexecuted_blocks=1 00:07:37.311 00:07:37.311 ' 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:37.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.311 --rc genhtml_branch_coverage=1 00:07:37.311 --rc genhtml_function_coverage=1 00:07:37.311 --rc genhtml_legend=1 00:07:37.311 --rc geninfo_all_blocks=1 00:07:37.311 --rc geninfo_unexecuted_blocks=1 00:07:37.311 00:07:37.311 ' 00:07:37.311 16:49:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:37.311 16:49:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69818 00:07:37.311 16:49:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:37.311 16:49:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69818 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69818 ']' 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.311 16:49:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.571 [2024-11-08 16:49:06.880485] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:37.571 [2024-11-08 16:49:06.880622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69818 ] 00:07:37.571 [2024-11-08 16:49:07.040251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.571 [2024-11-08 16:49:07.084994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:38.511 16:49:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:38.511 16:49:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69818 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69818 ']' 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69818 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69818 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.511 killing process 
with pid 69818 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69818' 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@969 -- # kill 69818 00:07:38.511 16:49:07 alias_rpc -- common/autotest_common.sh@974 -- # wait 69818 00:07:39.079 00:07:39.079 real 0m1.786s 00:07:39.079 user 0m1.818s 00:07:39.079 sys 0m0.505s 00:07:39.079 16:49:08 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.079 16:49:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.079 ************************************ 00:07:39.079 END TEST alias_rpc 00:07:39.079 ************************************ 00:07:39.079 16:49:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:39.079 16:49:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:39.079 16:49:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.079 16:49:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.079 16:49:08 -- common/autotest_common.sh@10 -- # set +x 00:07:39.079 ************************************ 00:07:39.079 START TEST spdkcli_tcp 00:07:39.079 ************************************ 00:07:39.079 16:49:08 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:39.079 * Looking for test storage... 
00:07:39.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:39.079 16:49:08 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:39.079 16:49:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:39.079 16:49:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:39.339 16:49:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:39.339 16:49:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.340 16:49:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:39.340 16:49:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.340 16:49:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.340 16:49:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.340 16:49:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:39.340 16:49:08 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.340 16:49:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:39.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.340 --rc genhtml_branch_coverage=1 00:07:39.340 --rc genhtml_function_coverage=1 00:07:39.340 --rc genhtml_legend=1 00:07:39.340 --rc geninfo_all_blocks=1 00:07:39.340 --rc geninfo_unexecuted_blocks=1 00:07:39.340 00:07:39.340 ' 00:07:39.340 16:49:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:39.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.340 --rc genhtml_branch_coverage=1 00:07:39.340 --rc genhtml_function_coverage=1 00:07:39.340 --rc genhtml_legend=1 00:07:39.340 --rc geninfo_all_blocks=1 00:07:39.340 --rc geninfo_unexecuted_blocks=1 00:07:39.340 00:07:39.340 ' 00:07:39.340 16:49:08 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:39.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.340 --rc genhtml_branch_coverage=1 00:07:39.340 --rc genhtml_function_coverage=1 00:07:39.340 --rc genhtml_legend=1 00:07:39.340 --rc geninfo_all_blocks=1 00:07:39.340 --rc geninfo_unexecuted_blocks=1 00:07:39.340 00:07:39.340 ' 00:07:39.340 16:49:08 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:39.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.340 --rc genhtml_branch_coverage=1 00:07:39.340 --rc genhtml_function_coverage=1 00:07:39.340 --rc genhtml_legend=1 00:07:39.340 --rc geninfo_all_blocks=1 00:07:39.340 --rc geninfo_unexecuted_blocks=1 00:07:39.340 00:07:39.340 ' 00:07:39.340 16:49:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:39.340 16:49:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:39.340 16:49:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:39.340 16:49:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:39.340 16:49:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:39.340 16:49:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:39.340 16:49:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:39.340 16:49:08 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.340 16:49:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.340 16:49:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69903 00:07:39.340 16:49:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:39.340 16:49:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69903 00:07:39.340 16:49:08 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 69903 ']' 00:07:39.340 16:49:08 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.340 16:49:08 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.340 16:49:08 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.340 16:49:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.340 16:49:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.340 [2024-11-08 16:49:08.752023] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:39.340 [2024-11-08 16:49:08.752162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69903 ] 00:07:39.599 [2024-11-08 16:49:08.912291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:39.599 [2024-11-08 16:49:08.965804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.599 [2024-11-08 16:49:08.965932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.168 16:49:09 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.168 16:49:09 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:40.168 16:49:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69909 00:07:40.168 16:49:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:40.168 16:49:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:40.426 [ 00:07:40.426 "bdev_malloc_delete", 
00:07:40.426 "bdev_malloc_create", 00:07:40.426 "bdev_null_resize", 00:07:40.426 "bdev_null_delete", 00:07:40.426 "bdev_null_create", 00:07:40.426 "bdev_nvme_cuse_unregister", 00:07:40.426 "bdev_nvme_cuse_register", 00:07:40.426 "bdev_opal_new_user", 00:07:40.426 "bdev_opal_set_lock_state", 00:07:40.426 "bdev_opal_delete", 00:07:40.426 "bdev_opal_get_info", 00:07:40.426 "bdev_opal_create", 00:07:40.426 "bdev_nvme_opal_revert", 00:07:40.426 "bdev_nvme_opal_init", 00:07:40.426 "bdev_nvme_send_cmd", 00:07:40.426 "bdev_nvme_set_keys", 00:07:40.426 "bdev_nvme_get_path_iostat", 00:07:40.426 "bdev_nvme_get_mdns_discovery_info", 00:07:40.426 "bdev_nvme_stop_mdns_discovery", 00:07:40.426 "bdev_nvme_start_mdns_discovery", 00:07:40.426 "bdev_nvme_set_multipath_policy", 00:07:40.426 "bdev_nvme_set_preferred_path", 00:07:40.426 "bdev_nvme_get_io_paths", 00:07:40.426 "bdev_nvme_remove_error_injection", 00:07:40.426 "bdev_nvme_add_error_injection", 00:07:40.426 "bdev_nvme_get_discovery_info", 00:07:40.426 "bdev_nvme_stop_discovery", 00:07:40.426 "bdev_nvme_start_discovery", 00:07:40.426 "bdev_nvme_get_controller_health_info", 00:07:40.426 "bdev_nvme_disable_controller", 00:07:40.426 "bdev_nvme_enable_controller", 00:07:40.426 "bdev_nvme_reset_controller", 00:07:40.426 "bdev_nvme_get_transport_statistics", 00:07:40.426 "bdev_nvme_apply_firmware", 00:07:40.426 "bdev_nvme_detach_controller", 00:07:40.426 "bdev_nvme_get_controllers", 00:07:40.426 "bdev_nvme_attach_controller", 00:07:40.426 "bdev_nvme_set_hotplug", 00:07:40.426 "bdev_nvme_set_options", 00:07:40.426 "bdev_passthru_delete", 00:07:40.427 "bdev_passthru_create", 00:07:40.427 "bdev_lvol_set_parent_bdev", 00:07:40.427 "bdev_lvol_set_parent", 00:07:40.427 "bdev_lvol_check_shallow_copy", 00:07:40.427 "bdev_lvol_start_shallow_copy", 00:07:40.427 "bdev_lvol_grow_lvstore", 00:07:40.427 "bdev_lvol_get_lvols", 00:07:40.427 "bdev_lvol_get_lvstores", 00:07:40.427 "bdev_lvol_delete", 00:07:40.427 "bdev_lvol_set_read_only", 
00:07:40.427 "bdev_lvol_resize", 00:07:40.427 "bdev_lvol_decouple_parent", 00:07:40.427 "bdev_lvol_inflate", 00:07:40.427 "bdev_lvol_rename", 00:07:40.427 "bdev_lvol_clone_bdev", 00:07:40.427 "bdev_lvol_clone", 00:07:40.427 "bdev_lvol_snapshot", 00:07:40.427 "bdev_lvol_create", 00:07:40.427 "bdev_lvol_delete_lvstore", 00:07:40.427 "bdev_lvol_rename_lvstore", 00:07:40.427 "bdev_lvol_create_lvstore", 00:07:40.427 "bdev_raid_set_options", 00:07:40.427 "bdev_raid_remove_base_bdev", 00:07:40.427 "bdev_raid_add_base_bdev", 00:07:40.427 "bdev_raid_delete", 00:07:40.427 "bdev_raid_create", 00:07:40.427 "bdev_raid_get_bdevs", 00:07:40.427 "bdev_error_inject_error", 00:07:40.427 "bdev_error_delete", 00:07:40.427 "bdev_error_create", 00:07:40.427 "bdev_split_delete", 00:07:40.427 "bdev_split_create", 00:07:40.427 "bdev_delay_delete", 00:07:40.427 "bdev_delay_create", 00:07:40.427 "bdev_delay_update_latency", 00:07:40.427 "bdev_zone_block_delete", 00:07:40.427 "bdev_zone_block_create", 00:07:40.427 "blobfs_create", 00:07:40.427 "blobfs_detect", 00:07:40.427 "blobfs_set_cache_size", 00:07:40.427 "bdev_aio_delete", 00:07:40.427 "bdev_aio_rescan", 00:07:40.427 "bdev_aio_create", 00:07:40.427 "bdev_ftl_set_property", 00:07:40.427 "bdev_ftl_get_properties", 00:07:40.427 "bdev_ftl_get_stats", 00:07:40.427 "bdev_ftl_unmap", 00:07:40.427 "bdev_ftl_unload", 00:07:40.427 "bdev_ftl_delete", 00:07:40.427 "bdev_ftl_load", 00:07:40.427 "bdev_ftl_create", 00:07:40.427 "bdev_virtio_attach_controller", 00:07:40.427 "bdev_virtio_scsi_get_devices", 00:07:40.427 "bdev_virtio_detach_controller", 00:07:40.427 "bdev_virtio_blk_set_hotplug", 00:07:40.427 "bdev_iscsi_delete", 00:07:40.427 "bdev_iscsi_create", 00:07:40.427 "bdev_iscsi_set_options", 00:07:40.427 "accel_error_inject_error", 00:07:40.427 "ioat_scan_accel_module", 00:07:40.427 "dsa_scan_accel_module", 00:07:40.427 "iaa_scan_accel_module", 00:07:40.427 "keyring_file_remove_key", 00:07:40.427 "keyring_file_add_key", 00:07:40.427 
"keyring_linux_set_options", 00:07:40.427 "fsdev_aio_delete", 00:07:40.427 "fsdev_aio_create", 00:07:40.427 "iscsi_get_histogram", 00:07:40.427 "iscsi_enable_histogram", 00:07:40.427 "iscsi_set_options", 00:07:40.427 "iscsi_get_auth_groups", 00:07:40.427 "iscsi_auth_group_remove_secret", 00:07:40.427 "iscsi_auth_group_add_secret", 00:07:40.427 "iscsi_delete_auth_group", 00:07:40.427 "iscsi_create_auth_group", 00:07:40.427 "iscsi_set_discovery_auth", 00:07:40.427 "iscsi_get_options", 00:07:40.427 "iscsi_target_node_request_logout", 00:07:40.427 "iscsi_target_node_set_redirect", 00:07:40.427 "iscsi_target_node_set_auth", 00:07:40.427 "iscsi_target_node_add_lun", 00:07:40.427 "iscsi_get_stats", 00:07:40.427 "iscsi_get_connections", 00:07:40.427 "iscsi_portal_group_set_auth", 00:07:40.427 "iscsi_start_portal_group", 00:07:40.427 "iscsi_delete_portal_group", 00:07:40.427 "iscsi_create_portal_group", 00:07:40.427 "iscsi_get_portal_groups", 00:07:40.427 "iscsi_delete_target_node", 00:07:40.427 "iscsi_target_node_remove_pg_ig_maps", 00:07:40.427 "iscsi_target_node_add_pg_ig_maps", 00:07:40.427 "iscsi_create_target_node", 00:07:40.427 "iscsi_get_target_nodes", 00:07:40.427 "iscsi_delete_initiator_group", 00:07:40.427 "iscsi_initiator_group_remove_initiators", 00:07:40.427 "iscsi_initiator_group_add_initiators", 00:07:40.427 "iscsi_create_initiator_group", 00:07:40.427 "iscsi_get_initiator_groups", 00:07:40.427 "nvmf_set_crdt", 00:07:40.427 "nvmf_set_config", 00:07:40.427 "nvmf_set_max_subsystems", 00:07:40.427 "nvmf_stop_mdns_prr", 00:07:40.427 "nvmf_publish_mdns_prr", 00:07:40.427 "nvmf_subsystem_get_listeners", 00:07:40.427 "nvmf_subsystem_get_qpairs", 00:07:40.427 "nvmf_subsystem_get_controllers", 00:07:40.427 "nvmf_get_stats", 00:07:40.427 "nvmf_get_transports", 00:07:40.427 "nvmf_create_transport", 00:07:40.427 "nvmf_get_targets", 00:07:40.427 "nvmf_delete_target", 00:07:40.427 "nvmf_create_target", 00:07:40.427 "nvmf_subsystem_allow_any_host", 00:07:40.427 
"nvmf_subsystem_set_keys", 00:07:40.427 "nvmf_subsystem_remove_host", 00:07:40.427 "nvmf_subsystem_add_host", 00:07:40.427 "nvmf_ns_remove_host", 00:07:40.427 "nvmf_ns_add_host", 00:07:40.427 "nvmf_subsystem_remove_ns", 00:07:40.427 "nvmf_subsystem_set_ns_ana_group", 00:07:40.427 "nvmf_subsystem_add_ns", 00:07:40.427 "nvmf_subsystem_listener_set_ana_state", 00:07:40.427 "nvmf_discovery_get_referrals", 00:07:40.427 "nvmf_discovery_remove_referral", 00:07:40.427 "nvmf_discovery_add_referral", 00:07:40.427 "nvmf_subsystem_remove_listener", 00:07:40.427 "nvmf_subsystem_add_listener", 00:07:40.427 "nvmf_delete_subsystem", 00:07:40.427 "nvmf_create_subsystem", 00:07:40.427 "nvmf_get_subsystems", 00:07:40.427 "env_dpdk_get_mem_stats", 00:07:40.427 "nbd_get_disks", 00:07:40.427 "nbd_stop_disk", 00:07:40.427 "nbd_start_disk", 00:07:40.427 "ublk_recover_disk", 00:07:40.427 "ublk_get_disks", 00:07:40.427 "ublk_stop_disk", 00:07:40.427 "ublk_start_disk", 00:07:40.427 "ublk_destroy_target", 00:07:40.427 "ublk_create_target", 00:07:40.427 "virtio_blk_create_transport", 00:07:40.427 "virtio_blk_get_transports", 00:07:40.427 "vhost_controller_set_coalescing", 00:07:40.427 "vhost_get_controllers", 00:07:40.427 "vhost_delete_controller", 00:07:40.427 "vhost_create_blk_controller", 00:07:40.427 "vhost_scsi_controller_remove_target", 00:07:40.427 "vhost_scsi_controller_add_target", 00:07:40.427 "vhost_start_scsi_controller", 00:07:40.427 "vhost_create_scsi_controller", 00:07:40.427 "thread_set_cpumask", 00:07:40.427 "scheduler_set_options", 00:07:40.427 "framework_get_governor", 00:07:40.427 "framework_get_scheduler", 00:07:40.427 "framework_set_scheduler", 00:07:40.427 "framework_get_reactors", 00:07:40.427 "thread_get_io_channels", 00:07:40.427 "thread_get_pollers", 00:07:40.427 "thread_get_stats", 00:07:40.427 "framework_monitor_context_switch", 00:07:40.427 "spdk_kill_instance", 00:07:40.427 "log_enable_timestamps", 00:07:40.427 "log_get_flags", 00:07:40.427 "log_clear_flag", 
00:07:40.427 "log_set_flag", 00:07:40.427 "log_get_level", 00:07:40.427 "log_set_level", 00:07:40.427 "log_get_print_level", 00:07:40.427 "log_set_print_level", 00:07:40.427 "framework_enable_cpumask_locks", 00:07:40.427 "framework_disable_cpumask_locks", 00:07:40.427 "framework_wait_init", 00:07:40.427 "framework_start_init", 00:07:40.427 "scsi_get_devices", 00:07:40.427 "bdev_get_histogram", 00:07:40.427 "bdev_enable_histogram", 00:07:40.427 "bdev_set_qos_limit", 00:07:40.427 "bdev_set_qd_sampling_period", 00:07:40.427 "bdev_get_bdevs", 00:07:40.427 "bdev_reset_iostat", 00:07:40.427 "bdev_get_iostat", 00:07:40.427 "bdev_examine", 00:07:40.427 "bdev_wait_for_examine", 00:07:40.427 "bdev_set_options", 00:07:40.427 "accel_get_stats", 00:07:40.427 "accel_set_options", 00:07:40.427 "accel_set_driver", 00:07:40.427 "accel_crypto_key_destroy", 00:07:40.427 "accel_crypto_keys_get", 00:07:40.427 "accel_crypto_key_create", 00:07:40.427 "accel_assign_opc", 00:07:40.427 "accel_get_module_info", 00:07:40.427 "accel_get_opc_assignments", 00:07:40.427 "vmd_rescan", 00:07:40.427 "vmd_remove_device", 00:07:40.427 "vmd_enable", 00:07:40.427 "sock_get_default_impl", 00:07:40.427 "sock_set_default_impl", 00:07:40.427 "sock_impl_set_options", 00:07:40.427 "sock_impl_get_options", 00:07:40.427 "iobuf_get_stats", 00:07:40.427 "iobuf_set_options", 00:07:40.427 "keyring_get_keys", 00:07:40.427 "framework_get_pci_devices", 00:07:40.427 "framework_get_config", 00:07:40.427 "framework_get_subsystems", 00:07:40.427 "fsdev_set_opts", 00:07:40.427 "fsdev_get_opts", 00:07:40.427 "trace_get_info", 00:07:40.427 "trace_get_tpoint_group_mask", 00:07:40.427 "trace_disable_tpoint_group", 00:07:40.427 "trace_enable_tpoint_group", 00:07:40.427 "trace_clear_tpoint_mask", 00:07:40.427 "trace_set_tpoint_mask", 00:07:40.427 "notify_get_notifications", 00:07:40.427 "notify_get_types", 00:07:40.427 "spdk_get_version", 00:07:40.427 "rpc_get_methods" 00:07:40.427 ] 00:07:40.427 16:49:09 spdkcli_tcp -- 
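The dump above is the target's answer to the `rpc_get_methods` RPC over the JSON-RPC socket. As a rough sketch (not SPDK's actual client code), a request of that kind can be framed like this; `build_jsonrpc_request` is a hypothetical helper and only builds the payload rather than talking to `/var/tmp/spdk.sock`:

```python
import json

def build_jsonrpc_request(method, request_id=1, params=None):
    # Hypothetical helper: frame a JSON-RPC 2.0 request of the kind
    # SPDK's scripts/rpc.py sends over the UNIX socket. This only
    # builds the payload; it does not open /var/tmp/spdk.sock.
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

payload = build_jsonrpc_request("rpc_get_methods")
print(payload)
```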
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:07:40.427 16:49:09 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:40.427 16:49:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:40.427 16:49:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:07:40.427 16:49:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69903
00:07:40.427 16:49:09 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69903 ']'
00:07:40.427 16:49:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69903
00:07:40.427 16:49:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname
00:07:40.427 16:49:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:40.427 16:49:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69903
00:07:40.427 16:49:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:40.427 16:49:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:40.427 killing process with pid 69903
16:49:09 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69903'
00:07:40.428 16:49:09 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69903
00:07:40.428 16:49:09 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69903
00:07:40.992
00:07:40.992 real 0m1.818s
00:07:40.992 user 0m2.994s
00:07:40.992 sys 0m0.563s
00:07:40.992 16:49:10 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:40.992 16:49:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:40.992 ************************************
00:07:40.992 END TEST spdkcli_tcp
00:07:40.992 ************************************
00:07:40.992 16:49:10 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:40.992 16:49:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:40.992 16:49:10 --
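The `killprocess` trace above first probes the target with `kill -0` before sending a real signal: signal 0 is never delivered, it only reports whether the pid exists and is signalable. A minimal Python sketch of that liveness probe (the `process_alive` helper is hypothetical, not part of the autotest scripts):

```python
import os

def process_alive(pid):
    # Mirror the shell's `kill -0 $pid` probe from killprocess():
    # signal 0 delivers nothing, but reports whether the pid exists
    # and can be signaled by the caller.
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # exists, but owned by another user
    return True

print(process_alive(os.getpid()))
```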
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.992 16:49:10 -- common/autotest_common.sh@10 -- # set +x 00:07:40.992 ************************************ 00:07:40.992 START TEST dpdk_mem_utility 00:07:40.992 ************************************ 00:07:40.992 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:40.992 * Looking for test storage... 00:07:40.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:40.992 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:40.992 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:07:40.992 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:40.992 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:40.992 16:49:10 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.992 16:49:10 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:40.993 
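The `cmp_versions` trace here checks whether the installed lcov is older than 2 by comparing dotted version strings field by field. A rough Python analogue of that comparison (handles purely numeric dotted versions; the shell helper additionally splits on `-` and `:` via `IFS=.-:`):

```python
def version_lt(a, b):
    # Compare dotted version strings numerically, field by field,
    # padding the shorter one with zeros (so "1.15" < "2").
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))
    pb += [0] * (width - len(pb))
    return pa < pb

print(version_lt("1.15", "2"))  # the `lt 1.15 2` check traced above
```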
16:49:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.993 16:49:10 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.251 16:49:10 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:41.251 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.251 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.251 --rc genhtml_branch_coverage=1 00:07:41.251 --rc genhtml_function_coverage=1 00:07:41.251 --rc genhtml_legend=1 00:07:41.251 --rc geninfo_all_blocks=1 00:07:41.251 --rc geninfo_unexecuted_blocks=1 00:07:41.251 00:07:41.251 ' 00:07:41.251 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.251 --rc 
genhtml_branch_coverage=1 00:07:41.251 --rc genhtml_function_coverage=1 00:07:41.251 --rc genhtml_legend=1 00:07:41.251 --rc geninfo_all_blocks=1 00:07:41.251 --rc geninfo_unexecuted_blocks=1 00:07:41.252 00:07:41.252 ' 00:07:41.252 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:41.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.252 --rc genhtml_branch_coverage=1 00:07:41.252 --rc genhtml_function_coverage=1 00:07:41.252 --rc genhtml_legend=1 00:07:41.252 --rc geninfo_all_blocks=1 00:07:41.252 --rc geninfo_unexecuted_blocks=1 00:07:41.252 00:07:41.252 ' 00:07:41.252 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:41.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.252 --rc genhtml_branch_coverage=1 00:07:41.252 --rc genhtml_function_coverage=1 00:07:41.252 --rc genhtml_legend=1 00:07:41.252 --rc geninfo_all_blocks=1 00:07:41.252 --rc geninfo_unexecuted_blocks=1 00:07:41.252 00:07:41.252 ' 00:07:41.252 16:49:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:41.252 16:49:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69992 00:07:41.252 16:49:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:41.252 16:49:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69992 00:07:41.252 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 69992 ']' 00:07:41.252 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.252 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
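`waitforlisten` above blocks until the freshly started `spdk_tgt` is accepting connections on `/var/tmp/spdk.sock`, retrying up to `max_retries` times. A hedged sketch of such a wait loop (the `wait_for_unix_socket` helper is hypothetical, not the actual autotest implementation):

```python
import os
import socket
import time

def wait_for_unix_socket(path, timeout=5.0, interval=0.1):
    # Poll until something accepts connections on the UNIX domain
    # socket at `path`, or give up after `timeout` seconds.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True
            except OSError:
                pass  # socket exists but is not accepting yet; retry
            finally:
                s.close()
        time.sleep(interval)
    return False
```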
00:07:41.252 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.252 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.252 16:49:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:41.252 [2024-11-08 16:49:10.613279] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:41.252 [2024-11-08 16:49:10.613437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69992 ] 00:07:41.252 [2024-11-08 16:49:10.772729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.511 [2024-11-08 16:49:10.815291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.081 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.081 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:42.081 16:49:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:42.081 16:49:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:42.081 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.081 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:42.081 { 00:07:42.081 "filename": "/tmp/spdk_mem_dump.txt" 00:07:42.081 } 00:07:42.081 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.081 16:49:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:42.081 DPDK memory size 860.000000 MiB in 1 heap(s) 00:07:42.081 1 heaps 
totaling size 860.000000 MiB
00:07:42.081 size: 860.000000 MiB heap id: 0
00:07:42.081 end heaps----------
00:07:42.081 9 mempools totaling size 642.649841 MiB
00:07:42.081 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:07:42.081 size: 158.602051 MiB name: PDU_data_out_Pool
00:07:42.081 size: 92.545471 MiB name: bdev_io_69992
00:07:42.081 size: 51.011292 MiB name: evtpool_69992
00:07:42.081 size: 50.003479 MiB name: msgpool_69992
00:07:42.081 size: 36.509338 MiB name: fsdev_io_69992
00:07:42.081 size: 21.763794 MiB name: PDU_Pool
00:07:42.081 size: 19.513306 MiB name: SCSI_TASK_Pool
00:07:42.081 size: 0.026123 MiB name: Session_Pool
00:07:42.081 end mempools-------
00:07:42.081 6 memzones totaling size 4.142822 MiB
00:07:42.081 size: 1.000366 MiB name: RG_ring_0_69992
00:07:42.081 size: 1.000366 MiB name: RG_ring_1_69992
00:07:42.081 size: 1.000366 MiB name: RG_ring_4_69992
00:07:42.081 size: 1.000366 MiB name: RG_ring_5_69992
00:07:42.081 size: 0.125366 MiB name: RG_ring_2_69992
00:07:42.081 size: 0.015991 MiB name: RG_ring_3_69992
00:07:42.081 end memzones-------
00:07:42.081 16:49:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:07:42.081 heap id: 0 total size: 860.000000 MiB number of busy elements: 321 number of free elements: 16
00:07:42.081 list of free elements.
size: 13.933960 MiB 00:07:42.081 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:42.081 element at address: 0x200000800000 with size: 1.996948 MiB 00:07:42.081 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:07:42.081 element at address: 0x20001be00000 with size: 0.999878 MiB 00:07:42.081 element at address: 0x200034a00000 with size: 0.994446 MiB 00:07:42.081 element at address: 0x200009600000 with size: 0.959839 MiB 00:07:42.081 element at address: 0x200015e00000 with size: 0.954285 MiB 00:07:42.081 element at address: 0x20001c000000 with size: 0.936584 MiB 00:07:42.081 element at address: 0x200000200000 with size: 0.835022 MiB 00:07:42.081 element at address: 0x20001d800000 with size: 0.567139 MiB 00:07:42.081 element at address: 0x20000d800000 with size: 0.489258 MiB 00:07:42.081 element at address: 0x200003e00000 with size: 0.487366 MiB 00:07:42.081 element at address: 0x20001c200000 with size: 0.485657 MiB 00:07:42.081 element at address: 0x200007000000 with size: 0.480286 MiB 00:07:42.081 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:07:42.081 element at address: 0x200003a00000 with size: 0.352112 MiB 00:07:42.081 list of standard malloc elements. 
size: 199.269348 MiB 00:07:42.081 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:07:42.081 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:07:42.081 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:07:42.081 element at address: 0x20001befff80 with size: 1.000122 MiB 00:07:42.081 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:07:42.081 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:42.081 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:07:42.081 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:42.081 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:07:42.081 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:42.081 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:07:42.082 element at 
address: 0x2000002d6b00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a5a240 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a5e700 with size: 0.000183 MiB 
00:07:42.082 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003aff880 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7cc40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7cd00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d180 with 
size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:07:42.082 element at address: 
0x200003e7e680 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000707af40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000707b000 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000707b180 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000707b240 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000707b300 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000707b480 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000707b540 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000707b600 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:07:42.082 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:07:42.082 
element at address: 0x20000d87d640 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891300 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891480 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891540 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891600 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891780 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891840 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891900 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891e40 with size: 0.000183 
MiB 00:07:42.082 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d892080 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d892140 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d892200 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d892380 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d892440 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d892500 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d892680 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d892740 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d892800 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:07:42.082 element at address: 0x20001d892980 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893040 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893100 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893280 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893340 
with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893400 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893580 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893640 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893700 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893880 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893940 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894000 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894180 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894240 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894300 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894480 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894540 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894600 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894780 with size: 0.000183 MiB 00:07:42.083 element at 
address: 0x20001d894840 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894900 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d895080 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d895140 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d895200 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d895380 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20001d895440 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6c900 with size: 0.000183 MiB 
00:07:42.083 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6de00 with 
size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:07:42.083 element at address: 
0x20002ac6f300 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:07:42.083 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:07:42.083 list of memzone associated elements. 
size: 646.796692 MiB 00:07:42.083 element at address: 0x20001d895500 with size: 211.416748 MiB 00:07:42.083 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:42.083 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:07:42.083 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:42.083 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:07:42.083 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69992_0 00:07:42.083 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:42.083 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69992_0 00:07:42.083 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:42.084 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69992_0 00:07:42.084 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:07:42.084 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69992_0 00:07:42.084 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:07:42.084 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:42.084 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:07:42.084 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:42.084 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:42.084 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69992 00:07:42.084 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:42.084 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69992 00:07:42.084 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:42.084 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69992 00:07:42.084 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:07:42.084 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:42.084 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:07:42.084 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:42.084 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:07:42.084 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:42.084 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:07:42.084 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:42.084 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:42.084 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69992 00:07:42.084 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:42.084 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69992 00:07:42.084 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:07:42.084 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69992 00:07:42.084 element at address: 0x200034afe940 with size: 1.000488 MiB 00:07:42.084 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69992 00:07:42.084 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:07:42.084 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69992 00:07:42.084 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:07:42.084 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69992 00:07:42.084 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:07:42.084 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:42.084 element at address: 0x20000707b780 with size: 0.500488 MiB 00:07:42.084 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:42.084 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:07:42.084 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:42.084 element at address: 0x200003a5e7c0 with size: 0.125488 MiB 00:07:42.084 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69992 00:07:42.084 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:07:42.084 associated 
memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:42.084 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:07:42.084 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:42.084 element at address: 0x200003a5a500 with size: 0.016113 MiB 00:07:42.084 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69992 00:07:42.084 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:07:42.084 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:42.084 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:07:42.084 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69992 00:07:42.084 element at address: 0x200003aff940 with size: 0.000305 MiB 00:07:42.084 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69992 00:07:42.084 element at address: 0x200003a5a300 with size: 0.000305 MiB 00:07:42.084 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69992 00:07:42.084 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:07:42.084 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:42.084 16:49:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:42.084 16:49:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69992 00:07:42.084 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 69992 ']' 00:07:42.084 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 69992 00:07:42.084 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:42.084 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.084 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69992 00:07:42.344 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.344 16:49:11 dpdk_mem_utility -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.344 killing process with pid 69992 00:07:42.344 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69992' 00:07:42.344 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 69992 00:07:42.344 16:49:11 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 69992 00:07:42.604 00:07:42.604 real 0m1.700s 00:07:42.604 user 0m1.664s 00:07:42.604 sys 0m0.516s 00:07:42.604 16:49:12 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.604 16:49:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:42.604 ************************************ 00:07:42.604 END TEST dpdk_mem_utility 00:07:42.604 ************************************ 00:07:42.604 16:49:12 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:42.604 16:49:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.604 16:49:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.604 16:49:12 -- common/autotest_common.sh@10 -- # set +x 00:07:42.604 ************************************ 00:07:42.604 START TEST event 00:07:42.604 ************************************ 00:07:42.604 16:49:12 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:42.864 * Looking for test storage... 
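The `killprocess 69992` trace above probes the process with `kill -0` (signal 0: permission/liveness check only, nothing delivered), refuses to kill a `sudo` wrapper, then sends SIGTERM and reaps with `wait`. A minimal sketch of that pattern — the function name and structure are illustrative, not the SPDK `autotest_common.sh` helper itself:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess idiom seen in the trace: probe, kill, reap.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no pid supplied
    kill -0 "$pid" 2>/dev/null || return 0    # signal 0: liveness probe only
    kill "$pid"                               # default SIGTERM
    wait "$pid" 2>/dev/null || true           # reap; ignore non-zero exit status
}

sleep 60 &
bg=$!
killprocess "$bg"
kill -0 "$bg" 2>/dev/null && echo alive || echo gone   # prints "gone"
```

The `2>/dev/null` on `wait` mirrors the log's quiet handling of the SIGTERM exit status; the probe-before-kill avoids a spurious error when the target already exited.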
00:07:42.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:42.864 16:49:12 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:42.864 16:49:12 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:42.864 16:49:12 event -- common/autotest_common.sh@1681 -- # lcov --version 00:07:42.864 16:49:12 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:42.864 16:49:12 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.864 16:49:12 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.864 16:49:12 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.865 16:49:12 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.865 16:49:12 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.865 16:49:12 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.865 16:49:12 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.865 16:49:12 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.865 16:49:12 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.865 16:49:12 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.865 16:49:12 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.865 16:49:12 event -- scripts/common.sh@344 -- # case "$op" in 00:07:42.865 16:49:12 event -- scripts/common.sh@345 -- # : 1 00:07:42.865 16:49:12 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.865 16:49:12 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.865 16:49:12 event -- scripts/common.sh@365 -- # decimal 1 00:07:42.865 16:49:12 event -- scripts/common.sh@353 -- # local d=1 00:07:42.865 16:49:12 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.865 16:49:12 event -- scripts/common.sh@355 -- # echo 1 00:07:42.865 16:49:12 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.865 16:49:12 event -- scripts/common.sh@366 -- # decimal 2 00:07:42.865 16:49:12 event -- scripts/common.sh@353 -- # local d=2 00:07:42.865 16:49:12 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.865 16:49:12 event -- scripts/common.sh@355 -- # echo 2 00:07:42.865 16:49:12 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.865 16:49:12 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.865 16:49:12 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.865 16:49:12 event -- scripts/common.sh@368 -- # return 0 00:07:42.865 16:49:12 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.865 16:49:12 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:42.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.865 --rc genhtml_branch_coverage=1 00:07:42.865 --rc genhtml_function_coverage=1 00:07:42.865 --rc genhtml_legend=1 00:07:42.865 --rc geninfo_all_blocks=1 00:07:42.865 --rc geninfo_unexecuted_blocks=1 00:07:42.865 00:07:42.865 ' 00:07:42.865 16:49:12 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:42.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.865 --rc genhtml_branch_coverage=1 00:07:42.865 --rc genhtml_function_coverage=1 00:07:42.865 --rc genhtml_legend=1 00:07:42.865 --rc geninfo_all_blocks=1 00:07:42.865 --rc geninfo_unexecuted_blocks=1 00:07:42.865 00:07:42.865 ' 00:07:42.865 16:49:12 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:42.865 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:42.865 --rc genhtml_branch_coverage=1 00:07:42.865 --rc genhtml_function_coverage=1 00:07:42.865 --rc genhtml_legend=1 00:07:42.865 --rc geninfo_all_blocks=1 00:07:42.865 --rc geninfo_unexecuted_blocks=1 00:07:42.865 00:07:42.865 ' 00:07:42.865 16:49:12 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:42.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.865 --rc genhtml_branch_coverage=1 00:07:42.865 --rc genhtml_function_coverage=1 00:07:42.865 --rc genhtml_legend=1 00:07:42.865 --rc geninfo_all_blocks=1 00:07:42.865 --rc geninfo_unexecuted_blocks=1 00:07:42.865 00:07:42.865 ' 00:07:42.865 16:49:12 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:42.865 16:49:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:42.865 16:49:12 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:42.865 16:49:12 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:42.865 16:49:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.865 16:49:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:42.865 ************************************ 00:07:42.865 START TEST event_perf 00:07:42.865 ************************************ 00:07:42.865 16:49:12 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:42.865 Running I/O for 1 seconds...[2024-11-08 16:49:12.336373] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:42.865 [2024-11-08 16:49:12.336481] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70078 ] 00:07:43.125 [2024-11-08 16:49:12.494203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.125 [2024-11-08 16:49:12.546250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.125 [2024-11-08 16:49:12.546333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.125 [2024-11-08 16:49:12.546368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.125 [2024-11-08 16:49:12.546474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.091 Running I/O for 1 seconds... 00:07:44.091 lcore 0: 201848 00:07:44.091 lcore 1: 201848 00:07:44.091 lcore 2: 201848 00:07:44.091 lcore 3: 201848 00:07:44.350 done. 
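The `event_perf -m 0xF` invocation above selects lcores 0-3, matching the four `Reactor started on core N` notices and the four `lcore N:` counters. Decoding such a hex core mask into an lcore list can be sketched as follows — `mask_to_cores` is an illustrative helper, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Sketch: expand a DPDK-style hex core mask (e.g. the -m 0xF above)
# into the list of lcore numbers whose bits are set.
mask_to_cores() {
    local mask=$(( $1 ))   # bash arithmetic accepts 0x-prefixed hex
    local core=0 cores=()
    while (( mask )); do
        if (( mask & 1 )); then
            cores+=("$core")   # bit set: this lcore is selected
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores[@]}"
}

mask_to_cores 0xF   # prints "0 1 2 3", the four reactors in the log
```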
00:07:44.350 00:07:44.350 real 0m1.344s 00:07:44.350 user 0m4.123s 00:07:44.350 sys 0m0.101s 00:07:44.350 16:49:13 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.350 16:49:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:44.350 ************************************ 00:07:44.350 END TEST event_perf 00:07:44.350 ************************************ 00:07:44.350 16:49:13 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:44.350 16:49:13 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:44.350 16:49:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.350 16:49:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:44.350 ************************************ 00:07:44.350 START TEST event_reactor 00:07:44.350 ************************************ 00:07:44.350 16:49:13 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:44.350 [2024-11-08 16:49:13.754222] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:44.350 [2024-11-08 16:49:13.754341] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70112 ] 00:07:44.609 [2024-11-08 16:49:13.913484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.609 [2024-11-08 16:49:13.956985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.544 test_start 00:07:45.544 oneshot 00:07:45.544 tick 100 00:07:45.544 tick 100 00:07:45.544 tick 250 00:07:45.544 tick 100 00:07:45.544 tick 100 00:07:45.544 tick 100 00:07:45.544 tick 250 00:07:45.544 tick 500 00:07:45.544 tick 100 00:07:45.544 tick 100 00:07:45.544 tick 250 00:07:45.544 tick 100 00:07:45.544 tick 100 00:07:45.544 test_end 00:07:45.544 00:07:45.544 real 0m1.339s 00:07:45.544 user 0m1.144s 00:07:45.544 sys 0m0.085s 00:07:45.544 16:49:15 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.544 16:49:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:45.544 ************************************ 00:07:45.544 END TEST event_reactor 00:07:45.544 ************************************ 00:07:45.803 16:49:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:45.803 16:49:15 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:45.803 16:49:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.803 16:49:15 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.803 ************************************ 00:07:45.803 START TEST event_reactor_perf 00:07:45.803 ************************************ 00:07:45.803 16:49:15 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:45.803 [2024-11-08 
16:49:15.155576] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:45.803 [2024-11-08 16:49:15.155737] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70154 ] 00:07:45.803 [2024-11-08 16:49:15.315969] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.062 [2024-11-08 16:49:15.364906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.998 test_start 00:07:46.998 test_end 00:07:46.998 Performance: 397471 events per second 00:07:46.998 00:07:46.998 real 0m1.343s 00:07:46.998 user 0m1.140s 00:07:46.998 sys 0m0.096s 00:07:46.998 16:49:16 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.998 16:49:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.998 ************************************ 00:07:46.998 END TEST event_reactor_perf 00:07:46.998 ************************************ 00:07:46.998 16:49:16 event -- event/event.sh@49 -- # uname -s 00:07:46.998 16:49:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:46.998 16:49:16 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:46.998 16:49:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.998 16:49:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.998 16:49:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:46.998 ************************************ 00:07:46.998 START TEST event_scheduler 00:07:46.998 ************************************ 00:07:46.998 16:49:16 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:47.257 * Looking for test storage... 
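The repeated `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace (from `scripts/common.sh`) splits each version string on `IFS=.-:` into arrays and compares them field by field numerically. A simplified sketch of that comparison — `version_lt` is illustrative and omits the trace's full operator handling:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions idea: split on . - : and compare
# corresponding numeric fields, padding the shorter version with 0.
version_lt() {
    local IFS='.-:'
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0}
        b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo yes || echo no   # prints "yes": 1 < 2 in field 0
```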
00:07:47.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:47.257 16:49:16 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:47.257 16:49:16 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:07:47.257 16:49:16 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:47.257 16:49:16 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:47.257 16:49:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.258 16:49:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.258 16:49:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.258 16:49:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:47.258 16:49:16 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.258 16:49:16 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:47.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.258 --rc genhtml_branch_coverage=1 00:07:47.258 --rc genhtml_function_coverage=1 00:07:47.258 --rc genhtml_legend=1 00:07:47.258 --rc geninfo_all_blocks=1 00:07:47.258 --rc geninfo_unexecuted_blocks=1 00:07:47.258 00:07:47.258 ' 00:07:47.258 16:49:16 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:47.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.258 --rc genhtml_branch_coverage=1 00:07:47.258 --rc genhtml_function_coverage=1 00:07:47.258 --rc 
genhtml_legend=1 00:07:47.258 --rc geninfo_all_blocks=1 00:07:47.258 --rc geninfo_unexecuted_blocks=1 00:07:47.258 00:07:47.258 ' 00:07:47.258 16:49:16 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:47.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.258 --rc genhtml_branch_coverage=1 00:07:47.258 --rc genhtml_function_coverage=1 00:07:47.258 --rc genhtml_legend=1 00:07:47.258 --rc geninfo_all_blocks=1 00:07:47.258 --rc geninfo_unexecuted_blocks=1 00:07:47.258 00:07:47.258 ' 00:07:47.258 16:49:16 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:47.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.258 --rc genhtml_branch_coverage=1 00:07:47.258 --rc genhtml_function_coverage=1 00:07:47.258 --rc genhtml_legend=1 00:07:47.258 --rc geninfo_all_blocks=1 00:07:47.258 --rc geninfo_unexecuted_blocks=1 00:07:47.258 00:07:47.258 ' 00:07:47.258 16:49:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:47.258 16:49:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:47.258 16:49:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70219 00:07:47.258 16:49:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:47.258 16:49:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70219 00:07:47.258 16:49:16 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70219 ']' 00:07:47.258 16:49:16 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.258 16:49:16 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:47.258 16:49:16 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.258 16:49:16 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.258 16:49:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:47.518 [2024-11-08 16:49:16.832783] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:47.518 [2024-11-08 16:49:16.832920] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70219 ] 00:07:47.518 [2024-11-08 16:49:16.986067] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.518 [2024-11-08 16:49:17.036718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.518 [2024-11-08 16:49:17.036883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.518 [2024-11-08 16:49:17.036949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.518 [2024-11-08 16:49:17.037017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.456 16:49:17 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.456 16:49:17 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:48.456 16:49:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:48.456 16:49:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.456 16:49:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.457 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:48.457 POWER: Cannot set governor of lcore 0 to userspace 00:07:48.457 
POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:48.457 POWER: Cannot set governor of lcore 0 to performance 00:07:48.457 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:48.457 POWER: Cannot set governor of lcore 0 to userspace 00:07:48.457 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:48.457 POWER: Cannot set governor of lcore 0 to userspace 00:07:48.457 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:48.457 POWER: Unable to set Power Management Environment for lcore 0 00:07:48.457 [2024-11-08 16:49:17.653218] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:48.457 [2024-11-08 16:49:17.653240] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:48.457 [2024-11-08 16:49:17.653254] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:48.457 [2024-11-08 16:49:17.653282] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:48.457 [2024-11-08 16:49:17.653292] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:48.457 [2024-11-08 16:49:17.653301] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:48.457 16:49:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.457 16:49:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:48.457 16:49:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.457 16:49:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.457 [2024-11-08 16:49:17.727110] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
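The `waitforlisten 70219` step above blocks until the daemon is listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100`. A minimal sketch of that wait loop, assuming a simple existence poll on the UNIX socket path (`wait_for_socket` is an illustrative name; the real helper also issues an RPC probe):

```shell
#!/usr/bin/env bash
# Sketch: poll until a UNIX domain socket path appears, with a
# bounded retry budget like the max_retries=100 in the trace.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                          # gave up waiting
}

wait_for_socket "/tmp/no_such_socket_$$" 3 || echo "timed out"
```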
00:07:48.457 16:49:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.457 16:49:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:48.457 16:49:17 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.457 16:49:17 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.457 16:49:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.457 ************************************ 00:07:48.457 START TEST scheduler_create_thread 00:07:48.457 ************************************ 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.457 2 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.457 3 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.457 4 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.457 5 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.457 6 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.457 7 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.457 8 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.457 9 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.457 16:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.838 10 00:07:49.838 16:49:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.838 16:49:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:07:49.838 16:49:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.838 16:49:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.219 16:49:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.219 16:49:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:51.219 16:49:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:51.219 16:49:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.219 16:49:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.786 16:49:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.786 16:49:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:51.786 16:49:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.786 16:49:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.393 16:49:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.393 16:49:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:52.393 16:49:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:52.393 16:49:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.393 16:49:21 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:53.332 ************************************ 00:07:53.332 END TEST scheduler_create_thread 00:07:53.332 ************************************ 00:07:53.332 16:49:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.332 00:07:53.332 real 0m4.779s 00:07:53.332 user 0m0.029s 00:07:53.332 sys 0m0.008s 00:07:53.332 16:49:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.332 16:49:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:53.332 16:49:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:53.332 16:49:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70219 00:07:53.332 16:49:22 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70219 ']' 00:07:53.332 16:49:22 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70219 00:07:53.332 16:49:22 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:53.332 16:49:22 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.332 16:49:22 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70219 00:07:53.332 killing process with pid 70219 00:07:53.332 16:49:22 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:53.332 16:49:22 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:53.332 16:49:22 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70219' 00:07:53.332 16:49:22 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70219 00:07:53.332 16:49:22 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70219 00:07:53.332 [2024-11-08 16:49:22.797581] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:53.592 ************************************ 00:07:53.592 00:07:53.592 real 0m6.561s 00:07:53.592 user 0m14.171s 00:07:53.592 sys 0m0.493s 00:07:53.592 16:49:23 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.592 16:49:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:53.592 END TEST event_scheduler 00:07:53.592 ************************************ 00:07:53.852 16:49:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:53.852 16:49:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:53.852 16:49:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.852 16:49:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.852 16:49:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:53.852 ************************************ 00:07:53.852 START TEST app_repeat 00:07:53.852 ************************************ 00:07:53.852 16:49:23 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70342 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:53.852 
16:49:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70342' 00:07:53.852 Process app_repeat pid: 70342 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:53.852 spdk_app_start Round 0 00:07:53.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:53.852 16:49:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70342 /var/tmp/spdk-nbd.sock 00:07:53.852 16:49:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70342 ']' 00:07:53.852 16:49:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:53.852 16:49:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.852 16:49:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:53.853 16:49:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.853 16:49:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:53.853 [2024-11-08 16:49:23.220523] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:53.853 [2024-11-08 16:49:23.220669] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70342 ] 00:07:54.113 [2024-11-08 16:49:23.381522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:54.113 [2024-11-08 16:49:23.427048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.113 [2024-11-08 16:49:23.427176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.681 16:49:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.681 16:49:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:54.681 16:49:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:54.941 Malloc0 00:07:54.941 16:49:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:55.201 Malloc1 00:07:55.201 16:49:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:55.201 16:49:24 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:55.201 /dev/nbd0 00:07:55.201 16:49:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:55.460 16:49:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:55.460 1+0 records in 00:07:55.460 1+0 
records out 00:07:55.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489738 s, 8.4 MB/s 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:55.460 16:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:55.460 16:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.460 16:49:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:55.460 /dev/nbd1 00:07:55.460 16:49:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:55.460 16:49:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:55.460 16:49:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:55.461 16:49:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:55.461 16:49:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:55.461 16:49:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:55.461 1+0 records in 00:07:55.461 1+0 records out 00:07:55.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444704 s, 9.2 MB/s 00:07:55.719 16:49:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:55.719 16:49:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:55.719 16:49:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:55.719 16:49:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:55.719 16:49:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:55.719 16:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:55.719 16:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.719 16:49:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:55.719 16:49:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.719 16:49:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:55.719 16:49:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:55.719 { 00:07:55.719 "nbd_device": "/dev/nbd0", 00:07:55.719 "bdev_name": "Malloc0" 00:07:55.719 }, 00:07:55.719 { 00:07:55.719 "nbd_device": "/dev/nbd1", 00:07:55.719 "bdev_name": "Malloc1" 00:07:55.719 } 00:07:55.719 ]' 00:07:55.719 16:49:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:55.719 { 00:07:55.719 "nbd_device": "/dev/nbd0", 00:07:55.719 "bdev_name": "Malloc0" 00:07:55.719 }, 00:07:55.719 { 00:07:55.719 "nbd_device": "/dev/nbd1", 00:07:55.719 "bdev_name": "Malloc1" 00:07:55.719 } 00:07:55.719 ]' 00:07:55.719 16:49:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:07:55.978 16:49:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:55.978 /dev/nbd1' 00:07:55.978 16:49:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:55.978 /dev/nbd1' 00:07:55.978 16:49:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:55.979 256+0 records in 00:07:55.979 256+0 records out 00:07:55.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134945 s, 77.7 MB/s 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:55.979 256+0 records in 00:07:55.979 256+0 records out 00:07:55.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215205 s, 48.7 MB/s 00:07:55.979 16:49:25 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:55.979 256+0 records in 00:07:55.979 256+0 records out 00:07:55.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193098 s, 54.3 MB/s 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.979 16:49:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:56.238 16:49:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.238 16:49:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.238 16:49:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.238 16:49:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.238 16:49:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.238 16:49:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:56.238 16:49:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:56.238 16:49:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.238 16:49:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.238 16:49:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.498 16:49:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:56.498 16:49:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:56.498 16:49:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.757 16:49:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:56.757 16:49:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:56.757 16:49:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.757 16:49:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:56.757 16:49:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:56.757 16:49:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:56.757 16:49:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:56.757 16:49:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:56.757 16:49:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:56.757 16:49:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:57.016 16:49:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:57.016 [2024-11-08 16:49:26.453428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:57.016 [2024-11-08 16:49:26.497042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.016 [2024-11-08 16:49:26.497052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.016 
[2024-11-08 16:49:26.538451] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:57.016 [2024-11-08 16:49:26.538547] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:00.305 spdk_app_start Round 1 00:08:00.305 16:49:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:00.305 16:49:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:00.305 16:49:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70342 /var/tmp/spdk-nbd.sock 00:08:00.305 16:49:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70342 ']' 00:08:00.305 16:49:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:00.305 16:49:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:00.305 16:49:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:00.305 16:49:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.305 16:49:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:00.305 16:49:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.305 16:49:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:00.305 16:49:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:00.305 Malloc0 00:08:00.305 16:49:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:00.563 Malloc1 00:08:00.563 16:49:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:00.563 16:49:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.563 16:49:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:00.563 16:49:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:00.563 16:49:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.563 16:49:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:00.563 16:49:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:00.563 16:49:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.564 16:49:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:00.564 16:49:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:00.564 16:49:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.564 16:49:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:00.564 16:49:29 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:00.564 16:49:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:00.564 16:49:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:00.564 16:49:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:00.823 /dev/nbd0 00:08:00.823 16:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:00.823 16:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:00.823 1+0 records in 00:08:00.823 1+0 records out 00:08:00.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391457 s, 10.5 MB/s 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:00.823 
16:49:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:00.823 16:49:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:00.823 16:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.823 16:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:00.823 16:49:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:01.082 /dev/nbd1 00:08:01.082 16:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:01.082 16:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:01.082 1+0 records in 00:08:01.082 1+0 records out 00:08:01.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418244 s, 9.8 MB/s 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:01.082 16:49:30 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:01.082 16:49:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:01.082 16:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.082 16:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:01.082 16:49:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.082 16:49:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.082 16:49:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:01.374 { 00:08:01.374 "nbd_device": "/dev/nbd0", 00:08:01.374 "bdev_name": "Malloc0" 00:08:01.374 }, 00:08:01.374 { 00:08:01.374 "nbd_device": "/dev/nbd1", 00:08:01.374 "bdev_name": "Malloc1" 00:08:01.374 } 00:08:01.374 ]' 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:01.374 { 00:08:01.374 "nbd_device": "/dev/nbd0", 00:08:01.374 "bdev_name": "Malloc0" 00:08:01.374 }, 00:08:01.374 { 00:08:01.374 "nbd_device": "/dev/nbd1", 00:08:01.374 "bdev_name": "Malloc1" 00:08:01.374 } 00:08:01.374 ]' 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:01.374 /dev/nbd1' 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:01.374 /dev/nbd1' 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:01.374 
16:49:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:01.374 256+0 records in 00:08:01.374 256+0 records out 00:08:01.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138397 s, 75.8 MB/s 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:01.374 256+0 records in 00:08:01.374 256+0 records out 00:08:01.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020574 s, 51.0 MB/s 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:01.374 256+0 records in 00:08:01.374 256+0 records out 00:08:01.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289223 s, 36.3 MB/s 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.374 16:49:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.637 16:49:31 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.637 16:49:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.637 16:49:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.637 16:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.637 16:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.637 16:49:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.637 16:49:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:01.637 16:49:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.637 16:49:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.637 16:49:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:01.896 16:49:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:01.896 16:49:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:01.896 16:49:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:01.896 16:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.896 16:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.896 16:49:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:01.896 16:49:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:01.896 16:49:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.896 16:49:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.896 16:49:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.896 16:49:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.155 16:49:31 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:02.155 16:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.155 16:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:02.155 16:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:02.155 16:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.155 16:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:02.155 16:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:02.155 16:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:02.155 16:49:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:02.155 16:49:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:02.155 16:49:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:02.155 16:49:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:02.155 16:49:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:02.414 16:49:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:02.414 [2024-11-08 16:49:31.902467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:02.673 [2024-11-08 16:49:31.945810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.673 [2024-11-08 16:49:31.945840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.673 [2024-11-08 16:49:31.986681] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:02.673 [2024-11-08 16:49:31.986742] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:08:05.965 16:49:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:05.965 spdk_app_start Round 2 00:08:05.965 16:49:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:05.965 16:49:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70342 /var/tmp/spdk-nbd.sock 00:08:05.965 16:49:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70342 ']' 00:08:05.965 16:49:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:05.965 16:49:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:05.965 16:49:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:05.965 16:49:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.965 16:49:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:05.965 16:49:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.965 16:49:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:05.965 16:49:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:05.965 Malloc0 00:08:05.965 16:49:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:05.965 Malloc1 00:08:05.965 16:49:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:05.965 
16:49:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:05.965 16:49:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:06.224 /dev/nbd0 00:08:06.224 16:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:06.224 16:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:06.224 16:49:35 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:06.224 1+0 records in 00:08:06.224 1+0 records out 00:08:06.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189122 s, 21.7 MB/s 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.224 16:49:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:06.224 16:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.224 16:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:06.225 16:49:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:06.485 /dev/nbd1 00:08:06.485 16:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:06.485 16:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.485 16:49:35 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:06.485 1+0 records in 00:08:06.485 1+0 records out 00:08:06.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228481 s, 17.9 MB/s 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.485 16:49:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:06.485 16:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.485 16:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:06.485 16:49:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.485 16:49:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.485 16:49:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:06.745 { 00:08:06.745 "nbd_device": "/dev/nbd0", 00:08:06.745 "bdev_name": "Malloc0" 00:08:06.745 }, 00:08:06.745 { 00:08:06.745 "nbd_device": "/dev/nbd1", 00:08:06.745 "bdev_name": 
"Malloc1" 00:08:06.745 } 00:08:06.745 ]' 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:06.745 { 00:08:06.745 "nbd_device": "/dev/nbd0", 00:08:06.745 "bdev_name": "Malloc0" 00:08:06.745 }, 00:08:06.745 { 00:08:06.745 "nbd_device": "/dev/nbd1", 00:08:06.745 "bdev_name": "Malloc1" 00:08:06.745 } 00:08:06.745 ]' 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:06.745 /dev/nbd1' 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:06.745 /dev/nbd1' 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:06.745 16:49:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:06.746 256+0 records in 00:08:06.746 256+0 records out 00:08:06.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012206 s, 85.9 MB/s 
00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:06.746 256+0 records in 00:08:06.746 256+0 records out 00:08:06.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197101 s, 53.2 MB/s 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:06.746 256+0 records in 00:08:06.746 256+0 records out 00:08:06.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193797 s, 54.1 MB/s 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.746 16:49:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.006 16:49:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.006 16:49:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.006 16:49:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.006 16:49:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.006 16:49:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.006 16:49:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.006 16:49:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:07.006 16:49:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.006 16:49:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.006 16:49:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:07.266 16:49:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:07.266 16:49:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:08:07.266 16:49:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:07.266 16:49:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.266 16:49:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.266 16:49:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:07.266 16:49:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:07.266 16:49:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.266 16:49:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.266 16:49:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.266 16:49:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:07.526 16:49:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:07.526 16:49:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:07.787 16:49:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:07.787 [2024-11-08 16:49:37.302258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:08.048 [2024-11-08 16:49:37.350444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.048 [2024-11-08 16:49:37.350453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.048 [2024-11-08 16:49:37.393107] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:08.048 [2024-11-08 16:49:37.393171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:10.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:10.664 16:49:40 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70342 /var/tmp/spdk-nbd.sock 00:08:10.664 16:49:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70342 ']' 00:08:10.664 16:49:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:10.664 16:49:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.664 16:49:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:10.664 16:49:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.664 16:49:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:10.924 16:49:40 event.app_repeat -- event/event.sh@39 -- # killprocess 70342 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70342 ']' 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70342 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70342 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70342' 00:08:10.924 killing process with pid 70342 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70342 00:08:10.924 16:49:40 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70342 00:08:11.184 spdk_app_start is called in Round 0. 00:08:11.184 Shutdown signal received, stop current app iteration 00:08:11.184 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:08:11.184 spdk_app_start is called in Round 1. 00:08:11.184 Shutdown signal received, stop current app iteration 00:08:11.184 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:08:11.184 spdk_app_start is called in Round 2. 
00:08:11.184 Shutdown signal received, stop current app iteration 00:08:11.184 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:08:11.184 spdk_app_start is called in Round 3. 00:08:11.184 Shutdown signal received, stop current app iteration 00:08:11.184 16:49:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:11.184 16:49:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:11.184 00:08:11.184 real 0m17.427s 00:08:11.184 user 0m38.306s 00:08:11.184 sys 0m2.723s 00:08:11.184 16:49:40 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.184 16:49:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 ************************************ 00:08:11.184 END TEST app_repeat 00:08:11.184 ************************************ 00:08:11.184 16:49:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:11.184 16:49:40 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:11.184 16:49:40 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.184 16:49:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.184 16:49:40 event -- common/autotest_common.sh@10 -- # set +x 00:08:11.184 ************************************ 00:08:11.184 START TEST cpu_locks 00:08:11.184 ************************************ 00:08:11.184 16:49:40 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:11.444 * Looking for test storage... 
00:08:11.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:11.444 16:49:40 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:11.444 16:49:40 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:08:11.444 16:49:40 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:11.444 16:49:40 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.444 16:49:40 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:11.444 16:49:40 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.444 16:49:40 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.444 --rc genhtml_branch_coverage=1 00:08:11.444 --rc genhtml_function_coverage=1 00:08:11.444 --rc genhtml_legend=1 00:08:11.444 --rc geninfo_all_blocks=1 00:08:11.444 --rc geninfo_unexecuted_blocks=1 00:08:11.444 00:08:11.444 ' 00:08:11.444 16:49:40 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.444 --rc genhtml_branch_coverage=1 00:08:11.444 --rc genhtml_function_coverage=1 00:08:11.444 --rc genhtml_legend=1 00:08:11.444 --rc geninfo_all_blocks=1 00:08:11.444 --rc geninfo_unexecuted_blocks=1 
00:08:11.444 00:08:11.444 ' 00:08:11.444 16:49:40 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.444 --rc genhtml_branch_coverage=1 00:08:11.444 --rc genhtml_function_coverage=1 00:08:11.444 --rc genhtml_legend=1 00:08:11.444 --rc geninfo_all_blocks=1 00:08:11.444 --rc geninfo_unexecuted_blocks=1 00:08:11.444 00:08:11.444 ' 00:08:11.444 16:49:40 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:11.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.444 --rc genhtml_branch_coverage=1 00:08:11.444 --rc genhtml_function_coverage=1 00:08:11.444 --rc genhtml_legend=1 00:08:11.444 --rc geninfo_all_blocks=1 00:08:11.444 --rc geninfo_unexecuted_blocks=1 00:08:11.444 00:08:11.444 ' 00:08:11.444 16:49:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:11.444 16:49:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:11.444 16:49:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:11.444 16:49:40 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:11.444 16:49:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.444 16:49:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.445 16:49:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:11.445 ************************************ 00:08:11.445 START TEST default_locks 00:08:11.445 ************************************ 00:08:11.445 16:49:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:11.445 16:49:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70767 00:08:11.445 16:49:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:11.445 
16:49:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70767 00:08:11.445 16:49:40 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70767 ']' 00:08:11.445 16:49:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.445 16:49:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.445 16:49:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.445 16:49:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.445 16:49:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:11.445 [2024-11-08 16:49:40.957791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:11.445 [2024-11-08 16:49:40.957977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70767 ] 00:08:11.705 [2024-11-08 16:49:41.122552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.705 [2024-11-08 16:49:41.171445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.275 16:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.275 16:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:12.275 16:49:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70767 00:08:12.275 16:49:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70767 00:08:12.275 16:49:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:12.843 16:49:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70767 00:08:12.843 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70767 ']' 00:08:12.843 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70767 00:08:12.843 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:12.843 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.843 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70767 00:08:12.843 killing process with pid 70767 00:08:12.843 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.843 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.843 16:49:42 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70767' 00:08:12.843 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70767 00:08:12.843 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70767 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70767 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70767 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70767 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70767 ']' 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:13.414 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70767) - No such process 00:08:13.414 ERROR: process (pid: 70767) is no longer running 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:13.414 00:08:13.414 real 0m1.782s 00:08:13.414 user 0m1.756s 00:08:13.414 sys 0m0.616s 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.414 16:49:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:13.414 ************************************ 00:08:13.414 END TEST default_locks 00:08:13.414 ************************************ 00:08:13.414 16:49:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:13.414 16:49:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:08:13.414 16:49:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.414 16:49:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:13.414 ************************************ 00:08:13.414 START TEST default_locks_via_rpc 00:08:13.414 ************************************ 00:08:13.414 16:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:13.414 16:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70820 00:08:13.414 16:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:13.414 16:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70820 00:08:13.414 16:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70820 ']' 00:08:13.414 16:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.414 16:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.414 16:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.414 16:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.414 16:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.414 [2024-11-08 16:49:42.815517] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:13.414 [2024-11-08 16:49:42.815694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70820 ] 00:08:13.673 [2024-11-08 16:49:42.975933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.673 [2024-11-08 16:49:43.025648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.242 16:49:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70820 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70820 00:08:14.242 16:49:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:14.502 16:49:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70820 00:08:14.502 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70820 ']' 00:08:14.502 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70820 00:08:14.502 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:14.502 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.502 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70820 00:08:14.502 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.502 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.502 killing process with pid 70820 00:08:14.502 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70820' 00:08:14.502 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70820 00:08:14.502 16:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70820 00:08:15.073 00:08:15.073 real 0m1.648s 00:08:15.073 user 0m1.657s 00:08:15.073 sys 0m0.535s 00:08:15.073 16:49:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.073 16:49:44 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.073 ************************************ 00:08:15.073 END TEST default_locks_via_rpc 00:08:15.073 ************************************ 00:08:15.073 16:49:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:15.073 16:49:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:15.073 16:49:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.073 16:49:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:15.073 ************************************ 00:08:15.073 START TEST non_locking_app_on_locked_coremask 00:08:15.073 ************************************ 00:08:15.073 16:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:15.073 16:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70866 00:08:15.073 16:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:15.073 16:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70866 /var/tmp/spdk.sock 00:08:15.073 16:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70866 ']' 00:08:15.073 16:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.073 16:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.073 16:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:15.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.073 16:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.073 16:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:15.073 [2024-11-08 16:49:44.524227] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:15.073 [2024-11-08 16:49:44.524376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70866 ] 00:08:15.333 [2024-11-08 16:49:44.683735] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.333 [2024-11-08 16:49:44.732080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.901 16:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.901 16:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:15.901 16:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70877 00:08:15.901 16:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70877 /var/tmp/spdk2.sock 00:08:15.901 16:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:15.901 16:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70877 ']' 00:08:15.901 16:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:15.901 16:49:45 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:15.901 16:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:15.901 16:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.901 16:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.161 [2024-11-08 16:49:45.447330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:16.161 [2024-11-08 16:49:45.447455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70877 ] 00:08:16.161 [2024-11-08 16:49:45.597247] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:16.161 [2024-11-08 16:49:45.597317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.433 [2024-11-08 16:49:45.691148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.003 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.003 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:17.003 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70866 00:08:17.003 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70866 00:08:17.003 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:17.003 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70866 00:08:17.003 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70866 ']' 00:08:17.003 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70866 00:08:17.003 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:17.262 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.263 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70866 00:08:17.263 killing process with pid 70866 00:08:17.263 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.263 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.263 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70866' 00:08:17.263 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70866 00:08:17.263 16:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70866 00:08:17.832 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70877 00:08:17.832 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70877 ']' 00:08:17.832 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70877 00:08:17.832 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:17.832 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.832 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70877 00:08:18.091 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.091 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.091 killing process with pid 70877 00:08:18.091 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70877' 00:08:18.091 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70877 00:08:18.091 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70877 00:08:18.351 00:08:18.351 real 0m3.324s 00:08:18.351 user 0m3.489s 00:08:18.351 sys 0m0.966s 00:08:18.351 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:18.351 16:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.351 ************************************ 00:08:18.351 END TEST non_locking_app_on_locked_coremask 00:08:18.351 ************************************ 00:08:18.351 16:49:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:18.351 16:49:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.351 16:49:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.351 16:49:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:18.351 ************************************ 00:08:18.351 START TEST locking_app_on_unlocked_coremask 00:08:18.351 ************************************ 00:08:18.351 16:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:18.351 16:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70946 00:08:18.351 16:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:18.351 16:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70946 /var/tmp/spdk.sock 00:08:18.351 16:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70946 ']' 00:08:18.351 16:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.351 16:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:18.351 16:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.351 16:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.351 16:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.610 [2024-11-08 16:49:47.912022] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:18.611 [2024-11-08 16:49:47.912763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70946 ] 00:08:18.611 [2024-11-08 16:49:48.072270] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:18.611 [2024-11-08 16:49:48.072346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.611 [2024-11-08 16:49:48.116084] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.550 16:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.550 16:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:19.551 16:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70961 00:08:19.551 16:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:19.551 16:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70961 /var/tmp/spdk2.sock 00:08:19.551 16:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70961 
']' 00:08:19.551 16:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:19.551 16:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.551 16:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:19.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:19.551 16:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.551 16:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:19.551 [2024-11-08 16:49:48.812820] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:19.551 [2024-11-08 16:49:48.812945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70961 ] 00:08:19.551 [2024-11-08 16:49:48.963072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.551 [2024-11-08 16:49:49.058797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.119 16:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.119 16:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:20.119 16:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70961 00:08:20.119 16:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70961 00:08:20.119 16:49:49 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:20.687 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70946 00:08:20.687 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70946 ']' 00:08:20.687 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70946 00:08:20.687 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:20.687 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.687 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70946 00:08:20.687 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.687 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.946 killing process with pid 70946 00:08:20.946 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70946' 00:08:20.946 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70946 00:08:20.946 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70946 00:08:21.515 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70961 00:08:21.515 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70961 ']' 00:08:21.515 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70961 00:08:21.515 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:08:21.515 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.515 16:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70961 00:08:21.515 16:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:21.515 16:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:21.515 killing process with pid 70961 00:08:21.515 16:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70961' 00:08:21.515 16:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70961 00:08:21.515 16:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70961 00:08:22.084 00:08:22.084 real 0m3.578s 00:08:22.084 user 0m3.715s 00:08:22.084 sys 0m1.117s 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.084 ************************************ 00:08:22.084 END TEST locking_app_on_unlocked_coremask 00:08:22.084 ************************************ 00:08:22.084 16:49:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:22.084 16:49:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.084 16:49:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.084 16:49:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.084 ************************************ 00:08:22.084 START TEST 
locking_app_on_locked_coremask 00:08:22.084 ************************************ 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71022 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71022 /var/tmp/spdk.sock 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71022 ']' 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.084 16:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.084 [2024-11-08 16:49:51.551964] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:22.084 [2024-11-08 16:49:51.552108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71022 ] 00:08:22.344 [2024-11-08 16:49:51.712378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.344 [2024-11-08 16:49:51.758113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.913 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.913 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71038 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71038 /var/tmp/spdk2.sock 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71038 /var/tmp/spdk2.sock 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71038 /var/tmp/spdk2.sock 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71038 ']' 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.914 16:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:23.174 [2024-11-08 16:49:52.453892] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:23.174 [2024-11-08 16:49:52.454015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71038 ] 00:08:23.174 [2024-11-08 16:49:52.601433] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71022 has claimed it. 00:08:23.174 [2024-11-08 16:49:52.601520] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:23.743 ERROR: process (pid: 71038) is no longer running 00:08:23.743 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71038) - No such process 00:08:23.743 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.743 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:23.743 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:23.743 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.743 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.743 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.743 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71022 00:08:23.743 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71022 00:08:23.743 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:24.312 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71022 00:08:24.312 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71022 ']' 00:08:24.312 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71022 00:08:24.312 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:24.312 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.312 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71022 00:08:24.312 
16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.312 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.312 killing process with pid 71022 00:08:24.312 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71022' 00:08:24.312 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71022 00:08:24.312 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71022 00:08:24.571 00:08:24.571 real 0m2.522s 00:08:24.571 user 0m2.687s 00:08:24.571 sys 0m0.777s 00:08:24.571 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.571 16:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.571 ************************************ 00:08:24.571 END TEST locking_app_on_locked_coremask 00:08:24.571 ************************************ 00:08:24.571 16:49:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:24.571 16:49:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.571 16:49:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.571 16:49:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:24.571 ************************************ 00:08:24.571 START TEST locking_overlapped_coremask 00:08:24.571 ************************************ 00:08:24.571 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:24.571 16:49:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71091 00:08:24.571 16:49:54 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:24.571 16:49:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71091 /var/tmp/spdk.sock 00:08:24.571 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71091 ']' 00:08:24.571 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.571 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.571 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.571 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.571 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.831 [2024-11-08 16:49:54.131798] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:24.831 [2024-11-08 16:49:54.131934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71091 ] 00:08:24.831 [2024-11-08 16:49:54.291613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:24.831 [2024-11-08 16:49:54.337618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.831 [2024-11-08 16:49:54.337683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.831 [2024-11-08 16:49:54.337761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.769 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.769 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:25.769 16:49:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71109 00:08:25.769 16:49:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:25.769 16:49:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71109 /var/tmp/spdk2.sock 00:08:25.769 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:25.769 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71109 /var/tmp/spdk2.sock 00:08:25.769 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:25.769 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.770 16:49:54 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:25.770 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.770 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71109 /var/tmp/spdk2.sock 00:08:25.770 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71109 ']' 00:08:25.770 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:25.770 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:25.770 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:25.770 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.770 16:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.770 [2024-11-08 16:49:55.080325] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:25.770 [2024-11-08 16:49:55.080455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71109 ] 00:08:25.770 [2024-11-08 16:49:55.235750] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71091 has claimed it. 00:08:25.770 [2024-11-08 16:49:55.235812] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:26.339 ERROR: process (pid: 71109) is no longer running 00:08:26.339 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71109) - No such process 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71091 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71091 ']' 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71091 00:08:26.339 16:49:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71091 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.339 killing process with pid 71091 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71091' 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71091 00:08:26.339 16:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71091 00:08:26.640 00:08:26.640 real 0m2.078s 00:08:26.640 user 0m5.548s 00:08:26.640 sys 0m0.505s 00:08:26.640 16:49:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.640 16:49:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:26.640 ************************************ 00:08:26.640 END TEST locking_overlapped_coremask 00:08:26.640 ************************************ 00:08:26.921 16:49:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:26.921 16:49:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.921 16:49:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.921 16:49:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 ************************************ 00:08:26.921 START TEST 
locking_overlapped_coremask_via_rpc 00:08:26.921 ************************************ 00:08:26.921 16:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:26.921 16:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71151 00:08:26.921 16:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71151 /var/tmp/spdk.sock 00:08:26.921 16:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:26.921 16:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71151 ']' 00:08:26.921 16:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.921 16:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.921 16:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.921 16:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.921 16:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 [2024-11-08 16:49:56.269660] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:26.921 [2024-11-08 16:49:56.269796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71151 ] 00:08:26.921 [2024-11-08 16:49:56.429177] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:26.921 [2024-11-08 16:49:56.429242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:27.194 [2024-11-08 16:49:56.479139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.194 [2024-11-08 16:49:56.479188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.194 [2024-11-08 16:49:56.479326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.764 16:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.764 16:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:27.764 16:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:27.764 16:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71169 00:08:27.764 16:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71169 /var/tmp/spdk2.sock 00:08:27.764 16:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71169 ']' 00:08:27.764 16:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:27.764 16:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.764 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:27.764 16:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:27.764 16:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.764 16:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.764 [2024-11-08 16:49:57.179281] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:27.764 [2024-11-08 16:49:57.179738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71169 ] 00:08:28.023 [2024-11-08 16:49:57.330587] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:28.024 [2024-11-08 16:49:57.330652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.024 [2024-11-08 16:49:57.434283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.024 [2024-11-08 16:49:57.437811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.024 [2024-11-08 16:49:57.437938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.593 16:49:58 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.593 [2024-11-08 16:49:58.040864] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71151 has claimed it. 00:08:28.593 request: 00:08:28.593 { 00:08:28.593 "method": "framework_enable_cpumask_locks", 00:08:28.593 "req_id": 1 00:08:28.593 } 00:08:28.593 Got JSON-RPC error response 00:08:28.593 response: 00:08:28.593 { 00:08:28.593 "code": -32603, 00:08:28.593 "message": "Failed to claim CPU core: 2" 00:08:28.593 } 00:08:28.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71151 /var/tmp/spdk.sock 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71151 ']' 00:08:28.593 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.594 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.594 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.594 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.594 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:28.854 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.854 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:28.854 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71169 /var/tmp/spdk2.sock 00:08:28.854 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71169 ']' 00:08:28.854 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:28.854 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.854 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
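The waitforlisten helper traced here polls with max_retries=100 until the target app's RPC socket is ready. A simplified sketch of that pattern (the function name and sleep interval are illustrative, not the autotest_common.sh implementation):

```shell
# Poll until a UNIX-domain socket path exists, giving up after a bounded
# number of retries; returns 0 on success, 1 on timeout.
wait_for_sock() {
  local sock=$1 max_retries=${2:-100} i=0
  while [ ! -S "$sock" ]; do
    i=$((i + 1))
    if [ "$i" -ge "$max_retries" ]; then
      return 1
    fi
    sleep 0.1
  done
  return 0
}
```

The real helper additionally checks that the target pid is still alive between retries and probes the socket with rpc.py; this sketch only waits for the path to appear.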
00:08:28.854 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.854 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.114 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.114 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:29.114 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:29.114 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:29.114 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:29.114 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:29.114 00:08:29.114 real 0m2.297s 00:08:29.114 user 0m1.078s 00:08:29.114 sys 0m0.151s 00:08:29.114 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.114 16:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.114 ************************************ 00:08:29.114 END TEST locking_overlapped_coremask_via_rpc 00:08:29.114 ************************************ 00:08:29.114 16:49:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:29.114 16:49:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71151 ]] 00:08:29.114 16:49:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71151 00:08:29.114 16:49:58 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71151 ']' 00:08:29.114 16:49:58 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71151 00:08:29.114 16:49:58 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:29.114 16:49:58 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.114 16:49:58 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71151 00:08:29.114 16:49:58 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.114 16:49:58 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.114 killing process with pid 71151 00:08:29.114 16:49:58 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71151' 00:08:29.114 16:49:58 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71151 00:08:29.114 16:49:58 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71151 00:08:29.683 16:49:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71169 ]] 00:08:29.683 16:49:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71169 00:08:29.683 16:49:58 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71169 ']' 00:08:29.683 16:49:58 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71169 00:08:29.683 16:49:58 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:29.683 16:49:58 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.683 16:49:58 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71169 00:08:29.683 killing process with pid 71169 00:08:29.683 16:49:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:29.683 16:49:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:29.683 16:49:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71169' 00:08:29.683 16:49:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71169 00:08:29.683 16:49:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71169 00:08:30.255 16:49:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:30.255 16:49:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:30.255 16:49:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71151 ]] 00:08:30.255 16:49:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71151 00:08:30.255 16:49:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71151 ']' 00:08:30.255 16:49:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71151 00:08:30.255 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71151) - No such process 00:08:30.255 16:49:59 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71151 is not found' 00:08:30.255 Process with pid 71151 is not found 00:08:30.255 16:49:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71169 ]] 00:08:30.255 16:49:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71169 00:08:30.255 16:49:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71169 ']' 00:08:30.255 16:49:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71169 00:08:30.255 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71169) - No such process 00:08:30.255 Process with pid 71169 is not found 00:08:30.255 16:49:59 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71169 is not found' 00:08:30.255 16:49:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:30.255 00:08:30.255 real 0m19.050s 00:08:30.255 user 0m32.039s 00:08:30.255 sys 0m5.778s 00:08:30.255 16:49:59 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.255 16:49:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:30.255 
************************************ 00:08:30.255 END TEST cpu_locks 00:08:30.255 ************************************ 00:08:30.255 00:08:30.255 real 0m47.681s 00:08:30.255 user 1m31.165s 00:08:30.255 sys 0m9.669s 00:08:30.255 16:49:59 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.255 16:49:59 event -- common/autotest_common.sh@10 -- # set +x 00:08:30.255 ************************************ 00:08:30.255 END TEST event 00:08:30.255 ************************************ 00:08:30.515 16:49:59 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:30.515 16:49:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.515 16:49:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.515 16:49:59 -- common/autotest_common.sh@10 -- # set +x 00:08:30.515 ************************************ 00:08:30.515 START TEST thread 00:08:30.515 ************************************ 00:08:30.515 16:49:59 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:30.515 * Looking for test storage... 
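The check_remaining_locks step in the cpu_locks suite above globs /var/tmp/spdk_cpu_lock_* and requires the result to equal a brace-expanded expected set. A sketch of that comparison against a temporary directory (paths shortened for illustration; the real check uses /var/tmp and requires bash for arrays and brace ranges):

```shell
# Create three per-core lock files, then compare the glob result with
# the brace-expanded expected list, as check_remaining_locks does.
dir=$(mktemp -d)
touch "$dir"/spdk_cpu_lock_{000..002}

locks=("$dir"/spdk_cpu_lock_*)
expected=("$dir"/spdk_cpu_lock_{000..002})

if [ "${locks[*]}" = "${expected[*]}" ]; then
  echo "no stale locks"
fi
rm -rf "$dir"
```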
00:08:30.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:30.515 16:49:59 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:30.515 16:49:59 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:08:30.515 16:49:59 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:30.515 16:50:00 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:30.515 16:50:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.515 16:50:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.515 16:50:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.515 16:50:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.515 16:50:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.515 16:50:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.515 16:50:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.515 16:50:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.515 16:50:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.515 16:50:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.515 16:50:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.515 16:50:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:30.515 16:50:00 thread -- scripts/common.sh@345 -- # : 1 00:08:30.515 16:50:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.515 16:50:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.515 16:50:00 thread -- scripts/common.sh@365 -- # decimal 1 00:08:30.515 16:50:00 thread -- scripts/common.sh@353 -- # local d=1 00:08:30.515 16:50:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.515 16:50:00 thread -- scripts/common.sh@355 -- # echo 1 00:08:30.515 16:50:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.515 16:50:00 thread -- scripts/common.sh@366 -- # decimal 2 00:08:30.515 16:50:00 thread -- scripts/common.sh@353 -- # local d=2 00:08:30.515 16:50:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.515 16:50:00 thread -- scripts/common.sh@355 -- # echo 2 00:08:30.515 16:50:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.515 16:50:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.515 16:50:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.515 16:50:00 thread -- scripts/common.sh@368 -- # return 0 00:08:30.515 16:50:00 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.515 16:50:00 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:30.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.515 --rc genhtml_branch_coverage=1 00:08:30.515 --rc genhtml_function_coverage=1 00:08:30.515 --rc genhtml_legend=1 00:08:30.515 --rc geninfo_all_blocks=1 00:08:30.515 --rc geninfo_unexecuted_blocks=1 00:08:30.515 00:08:30.515 ' 00:08:30.515 16:50:00 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:30.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.515 --rc genhtml_branch_coverage=1 00:08:30.515 --rc genhtml_function_coverage=1 00:08:30.515 --rc genhtml_legend=1 00:08:30.515 --rc geninfo_all_blocks=1 00:08:30.515 --rc geninfo_unexecuted_blocks=1 00:08:30.515 00:08:30.515 ' 00:08:30.515 16:50:00 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:30.515 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.515 --rc genhtml_branch_coverage=1 00:08:30.515 --rc genhtml_function_coverage=1 00:08:30.515 --rc genhtml_legend=1 00:08:30.515 --rc geninfo_all_blocks=1 00:08:30.515 --rc geninfo_unexecuted_blocks=1 00:08:30.515 00:08:30.515 ' 00:08:30.515 16:50:00 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:30.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.515 --rc genhtml_branch_coverage=1 00:08:30.515 --rc genhtml_function_coverage=1 00:08:30.515 --rc genhtml_legend=1 00:08:30.515 --rc geninfo_all_blocks=1 00:08:30.515 --rc geninfo_unexecuted_blocks=1 00:08:30.515 00:08:30.515 ' 00:08:30.515 16:50:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:30.516 16:50:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:30.516 16:50:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.516 16:50:00 thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.516 ************************************ 00:08:30.516 START TEST thread_poller_perf 00:08:30.516 ************************************ 00:08:30.516 16:50:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:30.776 [2024-11-08 16:50:00.079385] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
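The summary block this run prints reports busy cycles, total_run_count and tsc_hz, from which poller_cost is derived. A sketch of that arithmetic, using the figures from this run's output:

```shell
# poller_cost (cyc)  = busy cycles / number of poller runs
# poller_cost (nsec) = cost in cycles rescaled by the TSC frequency
busy=2301709570   # busy: line from the summary
runs=404000       # total_run_count
tsc_hz=2290000000 # tsc_hz (cyc)

cost_cyc=$((busy / runs))
cost_nsec=$((cost_cyc * 1000000000 / tsc_hz))
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
```

With these inputs the integer division reproduces the 5697 (cyc), 2487 (nsec) figures in the summary below (64-bit shell arithmetic is assumed for the nanosecond scaling).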
00:08:30.776 [2024-11-08 16:50:00.079504] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71301 ]
00:08:30.776 [2024-11-08 16:50:00.239813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.776 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:08:30.776 [2024-11-08 16:50:00.285422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:32.157 [2024-11-08T16:50:01.685Z] ======================================
00:08:32.157 [2024-11-08T16:50:01.685Z] busy:2301709570 (cyc)
00:08:32.157 [2024-11-08T16:50:01.685Z] total_run_count: 404000
00:08:32.157 [2024-11-08T16:50:01.685Z] tsc_hz: 2290000000 (cyc)
00:08:32.157 [2024-11-08T16:50:01.685Z] ======================================
00:08:32.157 [2024-11-08T16:50:01.685Z] poller_cost: 5697 (cyc), 2487 (nsec)
00:08:32.157
00:08:32.157 real 0m1.348s
00:08:32.157 user 0m1.147s
00:08:32.157 sys 0m0.095s
00:08:32.157 16:50:01 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:32.157 16:50:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:32.157 ************************************
00:08:32.157 END TEST thread_poller_perf
00:08:32.157 ************************************
00:08:32.157 16:50:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:32.157 16:50:01 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:08:32.157 16:50:01 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:32.157 16:50:01 thread -- common/autotest_common.sh@10 -- # set +x
00:08:32.157 ************************************
00:08:32.157 START TEST thread_poller_perf
************************************
00:08:32.157 16:50:01 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:32.157 [2024-11-08 16:50:01.492033] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:32.157 [2024-11-08 16:50:01.492170] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71338 ]
00:08:32.157 [2024-11-08 16:50:01.651128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:32.416 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:08:32.416 [2024-11-08 16:50:01.696684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.355 [2024-11-08T16:50:02.883Z] ======================================
00:08:33.355 [2024-11-08T16:50:02.883Z] busy:2293728498 (cyc)
00:08:33.355 [2024-11-08T16:50:02.883Z] total_run_count: 5298000
00:08:33.355 [2024-11-08T16:50:02.883Z] tsc_hz: 2290000000 (cyc)
00:08:33.355 [2024-11-08T16:50:02.883Z] ======================================
00:08:33.355 [2024-11-08T16:50:02.883Z] poller_cost: 432 (cyc), 188 (nsec)
00:08:33.355
00:08:33.355 real 0m1.342s
00:08:33.355 user 0m1.134s
00:08:33.355 sys 0m0.102s
00:08:33.356 16:50:02 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:33.356 16:50:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:33.356 ************************************
00:08:33.356 END TEST thread_poller_perf
00:08:33.356 ************************************
00:08:33.356 16:50:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:08:33.356
00:08:33.356 real 0m3.038s
00:08:33.356 user 0m2.423s
00:08:33.356 sys 0m0.422s
00:08:33.356 16:50:02 thread --
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.356 16:50:02 thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.356 ************************************ 00:08:33.356 END TEST thread 00:08:33.356 ************************************ 00:08:33.616 16:50:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:33.616 16:50:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:33.616 16:50:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.616 16:50:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.616 16:50:02 -- common/autotest_common.sh@10 -- # set +x 00:08:33.616 ************************************ 00:08:33.616 START TEST app_cmdline 00:08:33.616 ************************************ 00:08:33.616 16:50:02 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:33.616 * Looking for test storage... 00:08:33.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:33.616 16:50:03 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:33.616 16:50:03 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:08:33.616 16:50:03 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:33.616 16:50:03 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:33.616 16:50:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.616 16:50:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.616 16:50:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.616 16:50:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.616 16:50:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.616 16:50:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.616 16:50:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.617 16:50:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:33.617 16:50:03 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.617 16:50:03 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:33.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.617 --rc genhtml_branch_coverage=1 00:08:33.617 --rc genhtml_function_coverage=1 00:08:33.617 --rc 
genhtml_legend=1 00:08:33.617 --rc geninfo_all_blocks=1 00:08:33.617 --rc geninfo_unexecuted_blocks=1 00:08:33.617 00:08:33.617 ' 00:08:33.617 16:50:03 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:33.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.617 --rc genhtml_branch_coverage=1 00:08:33.617 --rc genhtml_function_coverage=1 00:08:33.617 --rc genhtml_legend=1 00:08:33.617 --rc geninfo_all_blocks=1 00:08:33.617 --rc geninfo_unexecuted_blocks=1 00:08:33.617 00:08:33.617 ' 00:08:33.617 16:50:03 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:33.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.617 --rc genhtml_branch_coverage=1 00:08:33.617 --rc genhtml_function_coverage=1 00:08:33.617 --rc genhtml_legend=1 00:08:33.617 --rc geninfo_all_blocks=1 00:08:33.617 --rc geninfo_unexecuted_blocks=1 00:08:33.617 00:08:33.617 ' 00:08:33.617 16:50:03 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:33.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.617 --rc genhtml_branch_coverage=1 00:08:33.617 --rc genhtml_function_coverage=1 00:08:33.617 --rc genhtml_legend=1 00:08:33.617 --rc geninfo_all_blocks=1 00:08:33.617 --rc geninfo_unexecuted_blocks=1 00:08:33.617 00:08:33.617 ' 00:08:33.617 16:50:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:33.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:33.617 16:50:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71421 00:08:33.617 16:50:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71421 00:08:33.617 16:50:03 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71421 ']' 00:08:33.617 16:50:03 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.617 16:50:03 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.617 16:50:03 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.617 16:50:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:33.617 16:50:03 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.617 16:50:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:33.877 [2024-11-08 16:50:03.224757] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
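spdk_tgt is launched above with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so any other RPC method is rejected with -32601 (Method not found). An illustrative model of that allowlist behaviour in shell (not spdk_tgt's implementation):

```shell
# Methods permitted by the allowlist; anything else gets turned away,
# mirroring the -32601 "Method not found" rejection seen in this test.
allowed="spdk_get_version rpc_get_methods"

is_allowed() {
  local m
  for m in $allowed; do
    if [ "$m" = "$1" ]; then
      return 0
    fi
  done
  return 1
}

is_allowed spdk_get_version && echo "spdk_get_version: allowed"
is_allowed env_dpdk_get_mem_stats || echo "env_dpdk_get_mem_stats: rejected (-32601)"
```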
00:08:33.877 [2024-11-08 16:50:03.224886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71421 ]
00:08:33.877 [2024-11-08 16:50:03.384301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:34.137 [2024-11-08 16:50:03.429734] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:34.706 16:50:04 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:34.706 16:50:04 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:08:34.706 16:50:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:08:34.706 {
00:08:34.706   "version": "SPDK v24.09.1-pre git sha1 b18e1bd62",
00:08:34.706   "fields": {
00:08:34.706     "major": 24,
00:08:34.706     "minor": 9,
00:08:34.707     "patch": 1,
00:08:34.707     "suffix": "-pre",
00:08:34.707     "commit": "b18e1bd62"
00:08:34.707   }
00:08:34.707 }
00:08:34.707 16:50:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:08:34.707 16:50:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:08:34.707 16:50:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:08:34.707 16:50:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:08:34.707 16:50:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:08:34.707 16:50:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:08:34.707 16:50:04 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:34.707 16:50:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:34.707 16:50:04 app_cmdline -- app/cmdline.sh@26 -- # sort
00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:34.967 16:50:04 app_cmdline --
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:34.967 16:50:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:34.967 16:50:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:34.967 request: 00:08:34.967 { 00:08:34.967 "method": "env_dpdk_get_mem_stats", 00:08:34.967 "req_id": 1 00:08:34.967 } 00:08:34.967 Got JSON-RPC error response 00:08:34.967 response: 00:08:34.967 { 00:08:34.967 "code": -32601, 00:08:34.967 "message": "Method not found" 00:08:34.967 } 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@653 -- # es=1 
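The spdk_get_version reply above carries both a human-readable string and a structured fields object. A sketch of recovering the major/minor numbers from the string with shell parameter expansion (the parsing approach is illustrative, not what cmdline.sh does; the `10#` base prefix keeps a zero-padded component like "09" numeric):

```shell
# Version string of the shape spdk_get_version returned above.
ver='SPDK v24.09.1-pre git sha1 b18e1bd62'

semver=${ver#SPDK v}          # drop the "SPDK v" prefix
semver=${semver%% *}          # keep only "24.09.1-pre"
major=$((10#${semver%%.*}))   # first dotted component
rest=${semver#*.}
minor=$((10#${rest%%.*}))     # second dotted component
echo "major=${major} minor=${minor}"
```

The arithmetic forms reproduce the fields object's major 24 and minor 9 despite the zero padding in the display string.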
00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.967 16:50:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71421 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71421 ']' 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71421 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71421 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.967 16:50:04 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:35.227 killing process with pid 71421 00:08:35.227 16:50:04 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71421' 00:08:35.227 16:50:04 app_cmdline -- common/autotest_common.sh@969 -- # kill 71421 00:08:35.227 16:50:04 app_cmdline -- common/autotest_common.sh@974 -- # wait 71421 00:08:35.487 00:08:35.487 real 0m1.972s 00:08:35.487 user 0m2.168s 00:08:35.487 sys 0m0.554s 00:08:35.487 16:50:04 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.487 16:50:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:35.487 ************************************ 00:08:35.487 END TEST app_cmdline 00:08:35.487 ************************************ 00:08:35.487 16:50:04 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:35.487 16:50:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:35.487 16:50:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.487 16:50:04 -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.487 ************************************ 00:08:35.487 START TEST version 00:08:35.487 ************************************ 00:08:35.487 16:50:04 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:35.747 * Looking for test storage... 00:08:35.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:35.747 16:50:05 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:35.747 16:50:05 version -- common/autotest_common.sh@1681 -- # lcov --version 00:08:35.747 16:50:05 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:35.747 16:50:05 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:35.747 16:50:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.747 16:50:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.747 16:50:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.747 16:50:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.747 16:50:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.747 16:50:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.747 16:50:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.747 16:50:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.747 16:50:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.747 16:50:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.747 16:50:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.747 16:50:05 version -- scripts/common.sh@344 -- # case "$op" in 00:08:35.747 16:50:05 version -- scripts/common.sh@345 -- # : 1 00:08:35.747 16:50:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.747 16:50:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.747 16:50:05 version -- scripts/common.sh@365 -- # decimal 1 00:08:35.747 16:50:05 version -- scripts/common.sh@353 -- # local d=1 00:08:35.747 16:50:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.747 16:50:05 version -- scripts/common.sh@355 -- # echo 1 00:08:35.747 16:50:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.747 16:50:05 version -- scripts/common.sh@366 -- # decimal 2 00:08:35.747 16:50:05 version -- scripts/common.sh@353 -- # local d=2 00:08:35.747 16:50:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.747 16:50:05 version -- scripts/common.sh@355 -- # echo 2 00:08:35.747 16:50:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.747 16:50:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.747 16:50:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.747 16:50:05 version -- scripts/common.sh@368 -- # return 0 00:08:35.747 16:50:05 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.747 16:50:05 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.747 --rc genhtml_branch_coverage=1 00:08:35.747 --rc genhtml_function_coverage=1 00:08:35.747 --rc genhtml_legend=1 00:08:35.747 --rc geninfo_all_blocks=1 00:08:35.747 --rc geninfo_unexecuted_blocks=1 00:08:35.747 00:08:35.747 ' 00:08:35.747 16:50:05 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.747 --rc genhtml_branch_coverage=1 00:08:35.747 --rc genhtml_function_coverage=1 00:08:35.747 --rc genhtml_legend=1 00:08:35.747 --rc geninfo_all_blocks=1 00:08:35.747 --rc geninfo_unexecuted_blocks=1 00:08:35.747 00:08:35.747 ' 00:08:35.747 16:50:05 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:35.747 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.747 --rc genhtml_branch_coverage=1 00:08:35.747 --rc genhtml_function_coverage=1 00:08:35.747 --rc genhtml_legend=1 00:08:35.747 --rc geninfo_all_blocks=1 00:08:35.747 --rc geninfo_unexecuted_blocks=1 00:08:35.747 00:08:35.747 ' 00:08:35.747 16:50:05 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.747 --rc genhtml_branch_coverage=1 00:08:35.747 --rc genhtml_function_coverage=1 00:08:35.747 --rc genhtml_legend=1 00:08:35.747 --rc geninfo_all_blocks=1 00:08:35.747 --rc geninfo_unexecuted_blocks=1 00:08:35.747 00:08:35.747 ' 00:08:35.747 16:50:05 version -- app/version.sh@17 -- # get_header_version major 00:08:35.747 16:50:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:35.747 16:50:05 version -- app/version.sh@14 -- # cut -f2 00:08:35.747 16:50:05 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.747 16:50:05 version -- app/version.sh@17 -- # major=24 00:08:35.747 16:50:05 version -- app/version.sh@18 -- # get_header_version minor 00:08:35.747 16:50:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:35.747 16:50:05 version -- app/version.sh@14 -- # cut -f2 00:08:35.747 16:50:05 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.747 16:50:05 version -- app/version.sh@18 -- # minor=9 00:08:35.747 16:50:05 version -- app/version.sh@19 -- # get_header_version patch 00:08:35.747 16:50:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:35.747 16:50:05 version -- app/version.sh@14 -- # cut -f2 00:08:35.747 16:50:05 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.747 16:50:05 version -- app/version.sh@19 -- # patch=1 00:08:35.747 
16:50:05 version -- app/version.sh@20 -- # get_header_version suffix 00:08:35.747 16:50:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:35.747 16:50:05 version -- app/version.sh@14 -- # cut -f2 00:08:35.747 16:50:05 version -- app/version.sh@14 -- # tr -d '"' 00:08:35.747 16:50:05 version -- app/version.sh@20 -- # suffix=-pre 00:08:35.747 16:50:05 version -- app/version.sh@22 -- # version=24.9 00:08:35.747 16:50:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:35.748 16:50:05 version -- app/version.sh@25 -- # version=24.9.1 00:08:35.748 16:50:05 version -- app/version.sh@28 -- # version=24.9.1rc0 00:08:35.748 16:50:05 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:35.748 16:50:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:35.748 16:50:05 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:08:35.748 16:50:05 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:08:35.748 ************************************ 00:08:35.748 END TEST version 00:08:35.748 ************************************ 00:08:35.748 00:08:35.748 real 0m0.314s 00:08:35.748 user 0m0.179s 00:08:35.748 sys 0m0.194s 00:08:35.748 16:50:05 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.748 16:50:05 version -- common/autotest_common.sh@10 -- # set +x 00:08:36.008 16:50:05 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:36.008 16:50:05 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:36.008 16:50:05 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:36.008 16:50:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.008 16:50:05 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.008 16:50:05 -- common/autotest_common.sh@10 -- # set +x 00:08:36.008 ************************************ 00:08:36.008 START TEST bdev_raid 00:08:36.008 ************************************ 00:08:36.008 16:50:05 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:36.008 * Looking for test storage... 00:08:36.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:36.008 16:50:05 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:36.008 16:50:05 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:08:36.008 16:50:05 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:36.008 16:50:05 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:36.008 16:50:05 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.269 16:50:05 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:36.269 16:50:05 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:36.269 16:50:05 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.269 16:50:05 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:36.269 16:50:05 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.269 16:50:05 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.269 16:50:05 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.269 16:50:05 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:36.269 16:50:05 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.269 16:50:05 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:36.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.269 --rc genhtml_branch_coverage=1 00:08:36.269 --rc genhtml_function_coverage=1 00:08:36.269 --rc genhtml_legend=1 00:08:36.269 --rc geninfo_all_blocks=1 00:08:36.269 --rc geninfo_unexecuted_blocks=1 00:08:36.269 00:08:36.269 ' 00:08:36.269 16:50:05 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:36.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.269 --rc genhtml_branch_coverage=1 00:08:36.269 --rc genhtml_function_coverage=1 00:08:36.269 --rc genhtml_legend=1 00:08:36.269 --rc geninfo_all_blocks=1 00:08:36.269 --rc geninfo_unexecuted_blocks=1 00:08:36.269 00:08:36.269 ' 00:08:36.269 16:50:05 bdev_raid -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:08:36.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.269 --rc genhtml_branch_coverage=1 00:08:36.269 --rc genhtml_function_coverage=1 00:08:36.269 --rc genhtml_legend=1 00:08:36.269 --rc geninfo_all_blocks=1 00:08:36.269 --rc geninfo_unexecuted_blocks=1 00:08:36.269 00:08:36.269 ' 00:08:36.269 16:50:05 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:36.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.269 --rc genhtml_branch_coverage=1 00:08:36.269 --rc genhtml_function_coverage=1 00:08:36.269 --rc genhtml_legend=1 00:08:36.269 --rc geninfo_all_blocks=1 00:08:36.269 --rc geninfo_unexecuted_blocks=1 00:08:36.269 00:08:36.269 ' 00:08:36.269 16:50:05 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:36.269 16:50:05 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:36.269 16:50:05 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:36.269 16:50:05 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:36.269 16:50:05 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:36.269 16:50:05 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:36.269 16:50:05 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:36.269 16:50:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.269 16:50:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.269 16:50:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.269 ************************************ 00:08:36.269 START TEST raid1_resize_data_offset_test 00:08:36.269 ************************************ 00:08:36.269 16:50:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:08:36.269 16:50:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=71587 00:08:36.269 16:50:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71587' 00:08:36.269 Process raid pid: 71587 00:08:36.269 16:50:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:36.269 16:50:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71587 00:08:36.269 16:50:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71587 ']' 00:08:36.269 16:50:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.269 16:50:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.269 16:50:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.269 16:50:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.269 16:50:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.269 [2024-11-08 16:50:05.641866] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:36.269 [2024-11-08 16:50:05.642130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.529 [2024-11-08 16:50:05.805115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.529 [2024-11-08 16:50:05.851102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.529 [2024-11-08 16:50:05.893749] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.529 [2024-11-08 16:50:05.893864] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.099 malloc0 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.099 malloc1 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.099 16:50:06 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.099 null0 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.099 [2024-11-08 16:50:06.537692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:37.099 [2024-11-08 16:50:06.539553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:37.099 [2024-11-08 16:50:06.539598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:37.099 [2024-11-08 16:50:06.539755] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:37.099 [2024-11-08 16:50:06.539842] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:37.099 [2024-11-08 16:50:06.540109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:08:37.099 [2024-11-08 16:50:06.540267] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:37.099 [2024-11-08 16:50:06.540286] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:08:37.099 [2024-11-08 16:50:06.540419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.099 [2024-11-08 16:50:06.601562] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.099 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.360 malloc2 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.360 [2024-11-08 16:50:06.725105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:37.360 [2024-11-08 16:50:06.729389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.360 [2024-11-08 16:50:06.731278] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71587 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71587 ']' 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71587 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71587 00:08:37.360 killing process with pid 71587 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71587' 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71587 00:08:37.360 [2024-11-08 16:50:06.826966] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.360 16:50:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71587 00:08:37.360 [2024-11-08 16:50:06.827119] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:37.360 [2024-11-08 16:50:06.827174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.360 [2024-11-08 16:50:06.827190] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:37.360 [2024-11-08 16:50:06.832910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.360 [2024-11-08 16:50:06.833194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.360 [2024-11-08 16:50:06.833210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:08:37.620 [2024-11-08 16:50:07.044098] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.880 16:50:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:08:37.880 00:08:37.880 real 0m1.731s 00:08:37.880 user 0m1.726s 00:08:37.880 sys 0m0.454s 00:08:37.880 16:50:07 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.880 16:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.880 ************************************ 00:08:37.880 END TEST raid1_resize_data_offset_test 00:08:37.880 ************************************ 00:08:37.880 16:50:07 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:37.880 16:50:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:37.880 16:50:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.880 16:50:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.880 ************************************ 00:08:37.880 START TEST raid0_resize_superblock_test 00:08:37.880 ************************************ 00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:37.880 Process raid pid: 71638 00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71638 00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71638' 00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71638 00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71638 ']' 00:08:37.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.880 16:50:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.140 [2024-11-08 16:50:07.439024] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:38.140 [2024-11-08 16:50:07.439337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.140 [2024-11-08 16:50:07.601475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.140 [2024-11-08 16:50:07.646512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.400 [2024-11-08 16:50:07.689084] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.400 [2024-11-08 16:50:07.689215] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.969 malloc0 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.969 [2024-11-08 16:50:08.381306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:38.969 [2024-11-08 16:50:08.381376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.969 [2024-11-08 16:50:08.381401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:38.969 [2024-11-08 16:50:08.381411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.969 [2024-11-08 16:50:08.383581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.969 [2024-11-08 16:50:08.383701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:38.969 pt0 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.969 18b43ddb-937b-4ae9-9f76-cc43276eee54 00:08:38.969 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.970 16:50:08 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:38.970 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.970 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.229 c1180a8b-84d8-4caa-8be2-fd0395dcae04 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.229 49dcacf2-81c8-45bb-8a2f-88dcf284573d 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.229 [2024-11-08 16:50:08.521310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev c1180a8b-84d8-4caa-8be2-fd0395dcae04 is claimed 00:08:39.229 [2024-11-08 16:50:08.521391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 49dcacf2-81c8-45bb-8a2f-88dcf284573d is claimed 00:08:39.229 [2024-11-08 16:50:08.521490] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:39.229 [2024-11-08 16:50:08.521502] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:39.229 [2024-11-08 16:50:08.521757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:39.229 [2024-11-08 16:50:08.521921] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:39.229 [2024-11-08 16:50:08.521932] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:08:39.229 [2024-11-08 16:50:08.522057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.229 16:50:08 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:39.229 [2024-11-08 16:50:08.637361] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.229 [2024-11-08 16:50:08.685175] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:39.229 [2024-11-08 16:50:08.685200] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c1180a8b-84d8-4caa-8be2-fd0395dcae04' was resized: old size 131072, new size 204800 
00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.229 [2024-11-08 16:50:08.697076] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:39.229 [2024-11-08 16:50:08.697099] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '49dcacf2-81c8-45bb-8a2f-88dcf284573d' was resized: old size 131072, new size 204800 00:08:39.229 [2024-11-08 16:50:08.697126] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:39.229 16:50:08 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.229 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:39.489 [2024-11-08 16:50:08.805009] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.489 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:39.489 [2024-11-08 16:50:08.852829] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:39.489 [2024-11-08 16:50:08.852906] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:39.489 [2024-11-08 16:50:08.852918] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.490 [2024-11-08 16:50:08.852932] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:39.490 [2024-11-08 16:50:08.853052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.490 [2024-11-08 16:50:08.853090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.490 [2024-11-08 16:50:08.853101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.490 [2024-11-08 16:50:08.864705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:39.490 [2024-11-08 16:50:08.864765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.490 [2024-11-08 16:50:08.864787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:39.490 [2024-11-08 16:50:08.864800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.490 [2024-11-08 16:50:08.866960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.490 
[2024-11-08 16:50:08.866999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:39.490 [2024-11-08 16:50:08.868590] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c1180a8b-84d8-4caa-8be2-fd0395dcae04 00:08:39.490 [2024-11-08 16:50:08.868677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev c1180a8b-84d8-4caa-8be2-fd0395dcae04 is claimed 00:08:39.490 [2024-11-08 16:50:08.868790] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 49dcacf2-81c8-45bb-8a2f-88dcf284573d 00:08:39.490 [2024-11-08 16:50:08.868816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 49dcacf2-81c8-45bb-8a2f-88dcf284573d is claimed 00:08:39.490 [2024-11-08 16:50:08.868905] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 49dcacf2-81c8-45bb-8a2f-88dcf284573d (2) smaller than existing raid bdev Raid (3) 00:08:39.490 [2024-11-08 16:50:08.868936] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev c1180a8b-84d8-4caa-8be2-fd0395dcae04: File exists 00:08:39.490 [2024-11-08 16:50:08.868980] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:08:39.490 [2024-11-08 16:50:08.868993] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:39.490 [2024-11-08 16:50:08.869247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:39.490 [2024-11-08 16:50:08.869365] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:08:39.490 [2024-11-08 16:50:08.869372] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:08:39.490 [2024-11-08 16:50:08.869523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.490 pt0 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.490 [2024-11-08 16:50:08.893188] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71638 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71638 ']' 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 71638 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71638 00:08:39.490 killing process with pid 71638 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71638' 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71638 00:08:39.490 [2024-11-08 16:50:08.970029] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.490 [2024-11-08 16:50:08.970110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.490 16:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71638 00:08:39.490 [2024-11-08 16:50:08.970152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.490 [2024-11-08 16:50:08.970160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:08:39.750 [2024-11-08 16:50:09.133440] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.012 ************************************ 00:08:40.012 END TEST raid0_resize_superblock_test 00:08:40.012 ************************************ 00:08:40.012 16:50:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:40.012 00:08:40.012 real 0m2.018s 00:08:40.012 user 0m2.282s 
00:08:40.012 sys 0m0.502s 00:08:40.012 16:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.012 16:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.012 16:50:09 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:40.012 16:50:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:40.012 16:50:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.012 16:50:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.012 ************************************ 00:08:40.012 START TEST raid1_resize_superblock_test 00:08:40.012 ************************************ 00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:40.012 Process raid pid: 71709 00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71709 00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71709' 00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71709 00:08:40.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71709 ']' 00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.012 16:50:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.012 [2024-11-08 16:50:09.520870] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:40.012 [2024-11-08 16:50:09.521032] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.273 [2024-11-08 16:50:09.682412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.273 [2024-11-08 16:50:09.729847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.273 [2024-11-08 16:50:09.772168] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.273 [2024-11-08 16:50:09.772280] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.844 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.844 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:40.844 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:08:40.844 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.844 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.103 malloc0 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.103 [2024-11-08 16:50:10.474805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:41.103 [2024-11-08 16:50:10.474895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.103 [2024-11-08 16:50:10.474923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:41.103 [2024-11-08 16:50:10.474935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.103 [2024-11-08 16:50:10.477132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.103 [2024-11-08 16:50:10.477175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:41.103 pt0 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.103 4409e226-19f0-4b4d-99f3-89c5e89ffd34 00:08:41.103 16:50:10 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.103 53a411a8-50a0-44e1-aeb9-19a7f5ce3fef 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.103 539a2ff9-00a7-4d7e-beb7-4ea3f05da6c8 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:41.103 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.104 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.104 [2024-11-08 16:50:10.610896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 53a411a8-50a0-44e1-aeb9-19a7f5ce3fef is claimed 00:08:41.104 [2024-11-08 16:50:10.611025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 539a2ff9-00a7-4d7e-beb7-4ea3f05da6c8 is claimed 00:08:41.104 [2024-11-08 16:50:10.611152] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:41.104 [2024-11-08 16:50:10.611166] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:41.104 [2024-11-08 16:50:10.611479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:41.104 [2024-11-08 16:50:10.611674] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:41.104 [2024-11-08 16:50:10.611688] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:08:41.104 [2024-11-08 16:50:10.611840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.104 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.104 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:41.104 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.104 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:41.104 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.364 16:50:10 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:41.364 [2024-11-08 16:50:10.726924] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.364 [2024-11-08 16:50:10.802729] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:41.364 [2024-11-08 16:50:10.802756] bdev_raid.c:2326:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '53a411a8-50a0-44e1-aeb9-19a7f5ce3fef' was resized: old size 131072, new size 204800 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.364 [2024-11-08 16:50:10.814616] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:41.364 [2024-11-08 16:50:10.814638] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '539a2ff9-00a7-4d7e-beb7-4ea3f05da6c8' was resized: old size 131072, new size 204800 00:08:41.364 [2024-11-08 16:50:10.814682] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:41.364 16:50:10 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.364 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.625 [2024-11-08 16:50:10.922535] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.625 [2024-11-08 16:50:10.970265] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:41.625 [2024-11-08 16:50:10.970396] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:41.625 [2024-11-08 16:50:10.970451] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:41.625 [2024-11-08 16:50:10.970652] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.625 [2024-11-08 16:50:10.970849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.625 [2024-11-08 16:50:10.970940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.625 [2024-11-08 16:50:10.971008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.625 [2024-11-08 16:50:10.982174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:41.625 [2024-11-08 16:50:10.982287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.625 [2024-11-08 16:50:10.982326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:41.625 [2024-11-08 16:50:10.982358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.625 
[2024-11-08 16:50:10.984516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.625 [2024-11-08 16:50:10.984589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:41.625 [2024-11-08 16:50:10.986144] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 53a411a8-50a0-44e1-aeb9-19a7f5ce3fef 00:08:41.625 [2024-11-08 16:50:10.986276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 53a411a8-50a0-44e1-aeb9-19a7f5ce3fef is claimed 00:08:41.625 [2024-11-08 16:50:10.986410] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 539a2ff9-00a7-4d7e-beb7-4ea3f05da6c8 00:08:41.625 [2024-11-08 16:50:10.986484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 539a2ff9-00a7-4d7e-beb7-4ea3f05da6c8 is claimed 00:08:41.625 [2024-11-08 16:50:10.986688] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 539a2ff9-00a7-4d7e-beb7-4ea3f05da6c8 (2) smaller than existing raid bdev Raid (3) 00:08:41.625 [2024-11-08 16:50:10.986766] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 53a411a8-50a0-44e1-aeb9-19a7f5ce3fef: File exists 00:08:41.625 [2024-11-08 16:50:10.986846] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:08:41.625 [2024-11-08 16:50:10.986887] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:41.625 [2024-11-08 16:50:10.987182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:41.625 pt0 00:08:41.625 [2024-11-08 16:50:10.987386] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:08:41.625 [2024-11-08 16:50:10.987398] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:08:41.625 [2024-11-08 16:50:10.987524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.625 [2024-11-08 16:50:11.010411] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:41.625 16:50:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71709 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@950 -- # '[' -z 71709 ']' 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71709 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71709 00:08:41.625 killing process with pid 71709 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71709' 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71709 00:08:41.625 [2024-11-08 16:50:11.091442] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.625 [2024-11-08 16:50:11.091524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.625 [2024-11-08 16:50:11.091575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.625 [2024-11-08 16:50:11.091584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:08:41.625 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71709 00:08:41.885 [2024-11-08 16:50:11.251583] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.145 ************************************ 00:08:42.145 END TEST raid1_resize_superblock_test 00:08:42.145 ************************************ 00:08:42.145 16:50:11 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:42.145 00:08:42.145 real 0m2.059s 00:08:42.145 user 0m2.359s 00:08:42.145 sys 0m0.511s 00:08:42.146 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.146 16:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.146 16:50:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:42.146 16:50:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:42.146 16:50:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:42.146 16:50:11 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:42.146 16:50:11 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:42.146 16:50:11 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:42.146 16:50:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:42.146 16:50:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.146 16:50:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.146 ************************************ 00:08:42.146 START TEST raid_function_test_raid0 00:08:42.146 ************************************ 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71787 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71787' 00:08:42.146 Process raid pid: 71787 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71787 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 71787 ']' 00:08:42.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.146 16:50:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:42.405 [2024-11-08 16:50:11.676098] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:42.405 [2024-11-08 16:50:11.676322] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.406 [2024-11-08 16:50:11.822313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.406 [2024-11-08 16:50:11.868120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.406 [2024-11-08 16:50:11.909774] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.406 [2024-11-08 16:50:11.909898] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.976 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:43.236 Base_1 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:43.236 Base_2 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:43.236 [2024-11-08 16:50:12.555338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:43.236 [2024-11-08 16:50:12.557361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:43.236 [2024-11-08 16:50:12.557426] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:43.236 [2024-11-08 16:50:12.557446] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:43.236 [2024-11-08 16:50:12.557723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:43.236 [2024-11-08 16:50:12.557861] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:43.236 [2024-11-08 16:50:12.557871] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:08:43.236 [2024-11-08 16:50:12.558020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:43.236 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:43.496 [2024-11-08 16:50:12.794963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:43.496 /dev/nbd0 00:08:43.496 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:43.496 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:43.496 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:43.496 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:08:43.496 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:43.496 
16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:43.496 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:43.496 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:08:43.496 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.497 1+0 records in 00:08:43.497 1+0 records out 00:08:43.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411335 s, 10.0 MB/s 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:43.497 16:50:12 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:43.757 { 00:08:43.757 "nbd_device": "/dev/nbd0", 00:08:43.757 "bdev_name": "raid" 00:08:43.757 } 00:08:43.757 ]' 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:43.757 { 00:08:43.757 "nbd_device": "/dev/nbd0", 00:08:43.757 "bdev_name": "raid" 00:08:43.757 } 00:08:43.757 ]' 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:43.757 4096+0 records in 00:08:43.757 4096+0 records out 00:08:43.757 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.033766 s, 62.1 MB/s 00:08:43.757 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:44.027 4096+0 records in 00:08:44.027 4096+0 records out 00:08:44.027 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.18312 s, 11.5 MB/s 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:44.027 128+0 records in 00:08:44.027 128+0 records out 00:08:44.027 65536 bytes (66 kB, 64 KiB) copied, 0.00128788 s, 50.9 MB/s 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:44.027 2035+0 records in 00:08:44.027 2035+0 records out 00:08:44.027 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0106381 s, 97.9 MB/s 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:44.027 16:50:13 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:44.027 456+0 records in 00:08:44.027 456+0 records out 00:08:44.027 233472 bytes (233 kB, 228 KiB) copied, 0.00378206 s, 61.7 MB/s 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:44.027 16:50:13 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.027 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:44.296 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:44.296 [2024-11-08 16:50:13.693279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.296 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:44.296 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:44.296 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.296 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.296 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:44.296 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:44.296 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.296 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:44.296 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:44.296 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:44.556 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:44.556 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:44.556 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:44.556 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:44.556 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:44.556 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:44.556 16:50:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71787 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 71787 ']' 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 71787 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71787 00:08:44.556 killing process with pid 71787 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71787' 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71787 
00:08:44.556 [2024-11-08 16:50:14.048507] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.556 [2024-11-08 16:50:14.048634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.556 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71787 00:08:44.556 [2024-11-08 16:50:14.048720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.556 [2024-11-08 16:50:14.048733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:08:44.556 [2024-11-08 16:50:14.071306] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.815 ************************************ 00:08:44.815 END TEST raid_function_test_raid0 00:08:44.815 ************************************ 00:08:44.815 16:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:44.815 00:08:44.815 real 0m2.720s 00:08:44.815 user 0m3.401s 00:08:44.815 sys 0m0.902s 00:08:44.815 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.815 16:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:45.075 16:50:14 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:45.075 16:50:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:45.075 16:50:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.075 16:50:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.075 ************************************ 00:08:45.075 START TEST raid_function_test_concat 00:08:45.075 ************************************ 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:45.075 Process raid pid: 71901 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71901 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71901' 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71901 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 71901 ']' 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.075 16:50:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:45.075 [2024-11-08 16:50:14.470870] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:45.075 [2024-11-08 16:50:14.471022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.335 [2024-11-08 16:50:14.632912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.335 [2024-11-08 16:50:14.680121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.335 [2024-11-08 16:50:14.722260] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.335 [2024-11-08 16:50:14.722299] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:45.906 Base_1 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:45.906 Base_2 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:45.906 [2024-11-08 16:50:15.349544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:45.906 [2024-11-08 16:50:15.351500] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:45.906 [2024-11-08 16:50:15.351568] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:45.906 [2024-11-08 16:50:15.351580] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:45.906 [2024-11-08 16:50:15.351859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:45.906 [2024-11-08 16:50:15.351989] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:45.906 [2024-11-08 16:50:15.352004] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:08:45.906 [2024-11-08 16:50:15.352169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.906 16:50:15 
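[editor's note] The DEBUG line above reports "blockcnt 131072, blocklen 512" for the assembled concat raid. That figure follows from the two base bdevs created with `bdev_malloc_create 32 512` (32 MiB each, 512-byte blocks): a concat raid sums the capacities of its bases. A minimal arithmetic sketch (variable names are illustrative, not from the test scripts):

```shell
# Geometry of the concat raid assembled above: two 32 MiB malloc base bdevs
# with 512-byte blocks; concat sums the base capacities.
base_mb=32
blocklen=512
base_blocks=$(( base_mb * 1024 * 1024 / blocklen ))   # blocks per base bdev
blockcnt=$(( 2 * base_blocks ))                       # total blocks of the raid
echo "$blockcnt"
```

This reproduces the logged blockcnt of 131072 (65536 blocks per base, doubled).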
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:45.906 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:46.166 [2024-11-08 16:50:15.597146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:46.166 /dev/nbd0 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.166 1+0 records in 00:08:46.166 1+0 records out 00:08:46.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060395 s, 6.8 MB/s 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:08:46.166 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:46.426 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:46.426 { 00:08:46.426 "nbd_device": "/dev/nbd0", 00:08:46.426 "bdev_name": "raid" 00:08:46.426 } 00:08:46.426 ]' 00:08:46.426 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:46.426 { 00:08:46.426 "nbd_device": "/dev/nbd0", 00:08:46.426 "bdev_name": "raid" 00:08:46.426 } 00:08:46.426 ]' 00:08:46.426 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.426 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:46.426 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:46.426 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.426 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:46.426 16:50:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:46.427 16:50:15 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:46.427 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:46.686 4096+0 records in 00:08:46.686 4096+0 records out 00:08:46.686 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0327249 s, 64.1 MB/s 00:08:46.686 16:50:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:46.686 4096+0 records in 00:08:46.686 4096+0 records out 00:08:46.686 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.210206 s, 10.0 MB/s 00:08:46.686 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:46.686 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:46.946 128+0 records in 00:08:46.946 128+0 records out 00:08:46.946 65536 bytes (66 kB, 64 KiB) copied, 0.00121485 s, 53.9 MB/s 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:46.946 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:46.946 2035+0 records in 00:08:46.946 2035+0 records out 00:08:46.946 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0146197 s, 71.3 MB/s 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:46.947 16:50:16 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:46.947 456+0 records in 00:08:46.947 456+0 records out 00:08:46.947 233472 bytes (233 kB, 228 KiB) copied, 0.00371291 s, 62.9 MB/s 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:46.947 
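[editor's note] The loop that just completed (bdev_raid.sh@36-@52) writes random data to the raid through /dev/nbd0, then for each (offset, count) pair zeroes that region in the reference file with dd and discards the same region on the device with blkdiscard, expecting cmp to still match since discarded regions read back as zeroes. A self-contained sketch of the same pattern, using a plain temp file in place of /dev/nbd0 (blkdiscard requires a real block device, so the discard step is modeled as zeroing; offsets and counts match the log):

```shell
# Stand-alone sketch of the unmap/verify loop, with a temp file as the "device".
# Assumes GNU coreutils (dd status=none).
blksize=512
ref=$(mktemp); dev=$(mktemp)
dd if=/dev/urandom of="$ref" bs=$blksize count=4096 status=none
cp "$ref" "$dev"                           # stands in for the dd-to-nbd write
for pair in "0 128" "1028 2035" "321 456"; do
  set -- $pair                             # $1 = block offset, $2 = block count
  off=$(( $1 * blksize )); len=$(( $2 * blksize ))
  # zero the region in the reference file (as the test's dd conv=notrunc does)...
  dd if=/dev/zero of="$ref" bs=$blksize seek=$1 count=$2 conv=notrunc status=none
  # ...and "discard" the same region on the stand-in device
  dd if=/dev/zero of="$dev" bs=$blksize seek=$1 count=$2 conv=notrunc status=none
  cmp -b -n 2097152 "$ref" "$dev" || exit 1   # must still match byte-for-byte
done
rm -f "$ref" "$dev"
```

The final pair yields off=164352 and len=233472, the same unmap_off/unmap_len values the test computes for '321 456'.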
16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.947 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:47.207 [2024-11-08 16:50:16.535695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.207 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:47.207 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:47.207 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:47.207 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.207 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.207 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:47.207 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:47.207 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.207 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:47.207 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:47.207 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:47.466 16:50:16 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71901 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 71901 ']' 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 71901 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71901 00:08:47.466 killing process with pid 71901 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 71901' 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 71901 00:08:47.466 [2024-11-08 16:50:16.853105] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.466 [2024-11-08 16:50:16.853209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.466 16:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 71901 00:08:47.466 [2024-11-08 16:50:16.853277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.466 [2024-11-08 16:50:16.853293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:08:47.466 [2024-11-08 16:50:16.876123] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.727 16:50:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:47.727 00:08:47.727 real 0m2.737s 00:08:47.727 user 0m3.365s 00:08:47.727 sys 0m0.934s 00:08:47.727 ************************************ 00:08:47.727 END TEST raid_function_test_concat 00:08:47.727 ************************************ 00:08:47.727 16:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.727 16:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:47.727 16:50:17 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:47.727 16:50:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.727 16:50:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.727 16:50:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.727 ************************************ 00:08:47.727 START TEST raid0_resize_test 00:08:47.727 ************************************ 00:08:47.727 16:50:17 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72017 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72017' 00:08:47.727 Process raid pid: 72017 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72017 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72017 ']' 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:47.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.727 16:50:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.987 [2024-11-08 16:50:17.273272] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:47.987 [2024-11-08 16:50:17.273951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.987 [2024-11-08 16:50:17.436371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.987 [2024-11-08 16:50:17.482616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.246 [2024-11-08 16:50:17.524878] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.246 [2024-11-08 16:50:17.524991] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.816 Base_1 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.816 Base_2 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.816 [2024-11-08 16:50:18.126573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:48.816 [2024-11-08 16:50:18.128694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:48.816 [2024-11-08 16:50:18.128753] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:48.816 [2024-11-08 16:50:18.128764] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:48.816 [2024-11-08 16:50:18.129087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:08:48.816 [2024-11-08 16:50:18.129194] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:48.816 [2024-11-08 16:50:18.129210] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:08:48.816 [2024-11-08 16:50:18.129355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.816 [2024-11-08 16:50:18.138511] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:48.816 [2024-11-08 16:50:18.138539] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:48.816 true 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.816 [2024-11-08 16:50:18.154693] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.816 [2024-11-08 16:50:18.198412] 
bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:48.816 [2024-11-08 16:50:18.198476] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:48.816 [2024-11-08 16:50:18.198551] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:48.816 true 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.816 [2024-11-08 16:50:18.214530] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72017 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72017 ']' 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72017 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@955 -- # uname 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72017 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72017' 00:08:48.816 killing process with pid 72017 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72017 00:08:48.816 [2024-11-08 16:50:18.284060] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.816 [2024-11-08 16:50:18.284184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.816 [2024-11-08 16:50:18.284257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.816 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72017 00:08:48.816 [2024-11-08 16:50:18.284318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:08:48.816 [2024-11-08 16:50:18.285866] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.077 16:50:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:49.077 00:08:49.077 real 0m1.341s 00:08:49.077 user 0m1.475s 00:08:49.077 sys 0m0.318s 00:08:49.077 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.077 16:50:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.077 ************************************ 00:08:49.077 END TEST raid0_resize_test 
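[editor's note] The resize checks in the test that just ended compare num_blocks from bdev_get_bdevs, converted to MiB, against an expected size: 131072 blocks before the base bdevs grow, 262144 after both are resized from 32 MiB to 64 MiB (raid level 0 sums the bases). A sketch of that bookkeeping (variable names are illustrative):

```shell
# Size arithmetic behind the raid0 resize checks: num_blocks reported by
# bdev_get_bdevs, times the 512-byte block size, expressed in MiB.
blksize=512
blkcnt_before=131072    # raid0 over two 32 MiB null bdevs
blkcnt_after=262144     # after both bases resized to 64 MiB
mb_before=$(( blkcnt_before * blksize / 1048576 ))
mb_after=$(( blkcnt_after * blksize / 1048576 ))
echo "$mb_before $mb_after"
```

This reproduces the 64 MiB and 128 MiB expected_size values the test compares with '[' ... '!=' ... ']'.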
************************************ 00:08:49.077 16:50:18 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:49.077 16:50:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.077 16:50:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.077 16:50:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.077 ************************************ 00:08:49.077 START TEST raid1_resize_test 00:08:49.077 ************************************ 00:08:49.077 16:50:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:08:49.077 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:49.077 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:49.077 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:49.077 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:49.077 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:49.077 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72068 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:49.337 Process raid pid: 72068 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72068' 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72068 00:08:49.337 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72068 ']' 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.337 16:50:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.337 [2024-11-08 16:50:18.683740] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:49.337 [2024-11-08 16:50:18.683968] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.337 [2024-11-08 16:50:18.843966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.597 [2024-11-08 16:50:18.890014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.597 [2024-11-08 16:50:18.932593] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.597 [2024-11-08 16:50:18.932744] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 Base_1 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 Base_2 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 [2024-11-08 16:50:19.582017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:50.167 [2024-11-08 16:50:19.583836] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:50.167 [2024-11-08 16:50:19.583894] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:50.167 [2024-11-08 16:50:19.583906] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:50.167 [2024-11-08 16:50:19.584151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:08:50.167 [2024-11-08 16:50:19.584261] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:50.167 [2024-11-08 16:50:19.584270] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Raid, raid_bdev 0x617000006280 00:08:50.167 [2024-11-08 16:50:19.584378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 [2024-11-08 16:50:19.593957] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:50.167 [2024-11-08 16:50:19.594023] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:50.167 true 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 [2024-11-08 16:50:19.610117] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 
00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 [2024-11-08 16:50:19.653857] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:50.167 [2024-11-08 16:50:19.653921] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:50.167 [2024-11-08 16:50:19.653977] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:50.167 true 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 [2024-11-08 16:50:19.669989] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:50.167 
16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72068 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72068 ']' 00:08:50.167 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72068 00:08:50.428 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:08:50.428 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.428 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72068 00:08:50.428 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.428 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.428 killing process with pid 72068 00:08:50.428 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72068' 00:08:50.428 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72068 00:08:50.428 [2024-11-08 16:50:19.734927] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.428 [2024-11-08 16:50:19.735066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.428 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72068 00:08:50.428 [2024-11-08 16:50:19.735485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.428 [2024-11-08 16:50:19.735499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:08:50.428 [2024-11-08 16:50:19.736627] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.688 16:50:19 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:08:50.688 00:08:50.688 real 0m1.373s 00:08:50.688 user 0m1.531s 00:08:50.688 sys 0m0.333s 00:08:50.688 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.688 16:50:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.688 ************************************ 00:08:50.688 END TEST raid1_resize_test 00:08:50.688 ************************************ 00:08:50.688 16:50:20 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:50.688 16:50:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:50.688 16:50:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:50.688 16:50:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:50.688 16:50:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.688 16:50:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.688 ************************************ 00:08:50.688 START TEST raid_state_function_test 00:08:50.688 ************************************ 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:50.688 16:50:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:50.688 Process raid pid: 72114 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72114 00:08:50.688 16:50:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72114' 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72114 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72114 ']' 00:08:50.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.688 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.688 [2024-11-08 16:50:20.138040] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:50.688 [2024-11-08 16:50:20.138254] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.948 [2024-11-08 16:50:20.299955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.948 [2024-11-08 16:50:20.345318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.948 [2024-11-08 16:50:20.387791] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.948 [2024-11-08 16:50:20.387843] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.546 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.546 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:51.546 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:51.546 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.546 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.547 [2024-11-08 16:50:20.977038] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.547 [2024-11-08 16:50:20.977170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.547 [2024-11-08 16:50:20.977202] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:51.547 [2024-11-08 16:50:20.977225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.547 16:50:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.547 16:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.547 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.547 "name": "Existed_Raid", 00:08:51.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.547 "strip_size_kb": 64, 00:08:51.547 "state": "configuring", 00:08:51.547 
"raid_level": "raid0", 00:08:51.547 "superblock": false, 00:08:51.547 "num_base_bdevs": 2, 00:08:51.547 "num_base_bdevs_discovered": 0, 00:08:51.547 "num_base_bdevs_operational": 2, 00:08:51.547 "base_bdevs_list": [ 00:08:51.547 { 00:08:51.547 "name": "BaseBdev1", 00:08:51.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.547 "is_configured": false, 00:08:51.547 "data_offset": 0, 00:08:51.547 "data_size": 0 00:08:51.547 }, 00:08:51.547 { 00:08:51.547 "name": "BaseBdev2", 00:08:51.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.547 "is_configured": false, 00:08:51.547 "data_offset": 0, 00:08:51.547 "data_size": 0 00:08:51.547 } 00:08:51.547 ] 00:08:51.547 }' 00:08:51.547 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.547 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.117 [2024-11-08 16:50:21.448138] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:52.117 [2024-11-08 16:50:21.448250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:52.117 [2024-11-08 16:50:21.460149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.117 [2024-11-08 16:50:21.460242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.117 [2024-11-08 16:50:21.460274] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.117 [2024-11-08 16:50:21.460297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.117 [2024-11-08 16:50:21.481084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.117 BaseBdev1 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.117 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.117 [ 00:08:52.117 { 00:08:52.117 "name": "BaseBdev1", 00:08:52.117 "aliases": [ 00:08:52.117 "9b0fa0a9-abf3-4c85-86d1-19a8330c9daf" 00:08:52.117 ], 00:08:52.117 "product_name": "Malloc disk", 00:08:52.117 "block_size": 512, 00:08:52.117 "num_blocks": 65536, 00:08:52.117 "uuid": "9b0fa0a9-abf3-4c85-86d1-19a8330c9daf", 00:08:52.117 "assigned_rate_limits": { 00:08:52.117 "rw_ios_per_sec": 0, 00:08:52.117 "rw_mbytes_per_sec": 0, 00:08:52.117 "r_mbytes_per_sec": 0, 00:08:52.117 "w_mbytes_per_sec": 0 00:08:52.117 }, 00:08:52.117 "claimed": true, 00:08:52.117 "claim_type": "exclusive_write", 00:08:52.117 "zoned": false, 00:08:52.117 "supported_io_types": { 00:08:52.117 "read": true, 00:08:52.117 "write": true, 00:08:52.117 "unmap": true, 00:08:52.117 "flush": true, 00:08:52.117 "reset": true, 00:08:52.117 "nvme_admin": false, 00:08:52.117 "nvme_io": false, 00:08:52.117 "nvme_io_md": false, 00:08:52.117 "write_zeroes": true, 00:08:52.117 "zcopy": true, 00:08:52.117 "get_zone_info": false, 00:08:52.117 "zone_management": false, 00:08:52.117 "zone_append": false, 00:08:52.117 "compare": false, 00:08:52.117 "compare_and_write": false, 00:08:52.117 "abort": true, 00:08:52.117 "seek_hole": false, 00:08:52.117 "seek_data": false, 00:08:52.117 "copy": true, 00:08:52.117 "nvme_iov_md": 
false 00:08:52.117 }, 00:08:52.117 "memory_domains": [ 00:08:52.117 { 00:08:52.117 "dma_device_id": "system", 00:08:52.117 "dma_device_type": 1 00:08:52.117 }, 00:08:52.117 { 00:08:52.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.117 "dma_device_type": 2 00:08:52.117 } 00:08:52.117 ], 00:08:52.117 "driver_specific": {} 00:08:52.117 } 00:08:52.117 ] 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.118 
16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.118 "name": "Existed_Raid", 00:08:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.118 "strip_size_kb": 64, 00:08:52.118 "state": "configuring", 00:08:52.118 "raid_level": "raid0", 00:08:52.118 "superblock": false, 00:08:52.118 "num_base_bdevs": 2, 00:08:52.118 "num_base_bdevs_discovered": 1, 00:08:52.118 "num_base_bdevs_operational": 2, 00:08:52.118 "base_bdevs_list": [ 00:08:52.118 { 00:08:52.118 "name": "BaseBdev1", 00:08:52.118 "uuid": "9b0fa0a9-abf3-4c85-86d1-19a8330c9daf", 00:08:52.118 "is_configured": true, 00:08:52.118 "data_offset": 0, 00:08:52.118 "data_size": 65536 00:08:52.118 }, 00:08:52.118 { 00:08:52.118 "name": "BaseBdev2", 00:08:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.118 "is_configured": false, 00:08:52.118 "data_offset": 0, 00:08:52.118 "data_size": 0 00:08:52.118 } 00:08:52.118 ] 00:08:52.118 }' 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.118 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.688 [2024-11-08 16:50:21.964332] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:52.688 [2024-11-08 16:50:21.964437] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.688 [2024-11-08 16:50:21.976341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.688 [2024-11-08 16:50:21.978288] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.688 [2024-11-08 16:50:21.978380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.688 16:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.688 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.688 "name": "Existed_Raid", 00:08:52.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.688 "strip_size_kb": 64, 00:08:52.688 "state": "configuring", 00:08:52.688 "raid_level": "raid0", 00:08:52.688 "superblock": false, 00:08:52.688 "num_base_bdevs": 2, 00:08:52.688 "num_base_bdevs_discovered": 1, 00:08:52.688 "num_base_bdevs_operational": 2, 00:08:52.688 "base_bdevs_list": [ 00:08:52.688 { 00:08:52.688 "name": "BaseBdev1", 00:08:52.688 "uuid": "9b0fa0a9-abf3-4c85-86d1-19a8330c9daf", 00:08:52.688 "is_configured": true, 00:08:52.688 "data_offset": 0, 00:08:52.688 "data_size": 65536 00:08:52.688 }, 00:08:52.688 { 00:08:52.688 "name": "BaseBdev2", 00:08:52.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.688 "is_configured": false, 00:08:52.688 "data_offset": 0, 00:08:52.688 "data_size": 0 00:08:52.688 } 00:08:52.688 
] 00:08:52.688 }' 00:08:52.688 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.688 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.949 [2024-11-08 16:50:22.435226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.949 [2024-11-08 16:50:22.435379] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:52.949 [2024-11-08 16:50:22.435413] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:52.949 [2024-11-08 16:50:22.435786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:52.949 [2024-11-08 16:50:22.436004] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:52.949 [2024-11-08 16:50:22.436063] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:52.949 BaseBdev2 00:08:52.949 [2024-11-08 16:50:22.436365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:52.949 16:50:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.949 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.949 [ 00:08:52.949 { 00:08:52.949 "name": "BaseBdev2", 00:08:52.949 "aliases": [ 00:08:52.949 "3af1ac41-430f-4ba9-89b3-7849d6314a99" 00:08:52.949 ], 00:08:52.949 "product_name": "Malloc disk", 00:08:52.949 "block_size": 512, 00:08:52.949 "num_blocks": 65536, 00:08:52.949 "uuid": "3af1ac41-430f-4ba9-89b3-7849d6314a99", 00:08:52.949 "assigned_rate_limits": { 00:08:52.949 "rw_ios_per_sec": 0, 00:08:52.949 "rw_mbytes_per_sec": 0, 00:08:52.949 "r_mbytes_per_sec": 0, 00:08:52.949 "w_mbytes_per_sec": 0 00:08:52.949 }, 00:08:52.949 "claimed": true, 00:08:52.949 "claim_type": "exclusive_write", 00:08:52.949 "zoned": false, 00:08:52.949 "supported_io_types": { 00:08:52.949 "read": true, 00:08:52.949 "write": true, 00:08:52.949 "unmap": true, 00:08:52.949 "flush": true, 00:08:52.949 "reset": true, 00:08:52.949 "nvme_admin": false, 00:08:52.949 "nvme_io": false, 00:08:52.949 "nvme_io_md": 
false, 00:08:52.949 "write_zeroes": true, 00:08:52.949 "zcopy": true, 00:08:52.949 "get_zone_info": false, 00:08:52.949 "zone_management": false, 00:08:52.949 "zone_append": false, 00:08:52.949 "compare": false, 00:08:52.949 "compare_and_write": false, 00:08:52.949 "abort": true, 00:08:52.949 "seek_hole": false, 00:08:52.949 "seek_data": false, 00:08:52.949 "copy": true, 00:08:52.949 "nvme_iov_md": false 00:08:52.949 }, 00:08:52.949 "memory_domains": [ 00:08:52.949 { 00:08:52.949 "dma_device_id": "system", 00:08:52.949 "dma_device_type": 1 00:08:52.949 }, 00:08:52.949 { 00:08:52.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.949 "dma_device_type": 2 00:08:52.949 } 00:08:52.949 ], 00:08:52.949 "driver_specific": {} 00:08:53.209 } 00:08:53.209 ] 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.209 "name": "Existed_Raid", 00:08:53.209 "uuid": "50b97cfb-577c-4901-93da-ed7647e8a2d3", 00:08:53.209 "strip_size_kb": 64, 00:08:53.209 "state": "online", 00:08:53.209 "raid_level": "raid0", 00:08:53.209 "superblock": false, 00:08:53.209 "num_base_bdevs": 2, 00:08:53.209 "num_base_bdevs_discovered": 2, 00:08:53.209 "num_base_bdevs_operational": 2, 00:08:53.209 "base_bdevs_list": [ 00:08:53.209 { 00:08:53.209 "name": "BaseBdev1", 00:08:53.209 "uuid": "9b0fa0a9-abf3-4c85-86d1-19a8330c9daf", 00:08:53.209 "is_configured": true, 00:08:53.209 "data_offset": 0, 00:08:53.209 "data_size": 65536 00:08:53.209 }, 00:08:53.209 { 00:08:53.209 "name": "BaseBdev2", 00:08:53.209 "uuid": "3af1ac41-430f-4ba9-89b3-7849d6314a99", 00:08:53.209 "is_configured": true, 00:08:53.209 "data_offset": 0, 00:08:53.209 "data_size": 65536 00:08:53.209 } 00:08:53.209 ] 00:08:53.209 }' 00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:53.209 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.468 [2024-11-08 16:50:22.850844] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.468 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.468 "name": "Existed_Raid", 00:08:53.468 "aliases": [ 00:08:53.468 "50b97cfb-577c-4901-93da-ed7647e8a2d3" 00:08:53.468 ], 00:08:53.468 "product_name": "Raid Volume", 00:08:53.468 "block_size": 512, 00:08:53.468 "num_blocks": 131072, 00:08:53.468 "uuid": "50b97cfb-577c-4901-93da-ed7647e8a2d3", 00:08:53.468 "assigned_rate_limits": { 00:08:53.468 "rw_ios_per_sec": 0, 00:08:53.468 "rw_mbytes_per_sec": 0, 00:08:53.468 "r_mbytes_per_sec": 
0, 00:08:53.468 "w_mbytes_per_sec": 0 00:08:53.468 }, 00:08:53.468 "claimed": false, 00:08:53.468 "zoned": false, 00:08:53.468 "supported_io_types": { 00:08:53.468 "read": true, 00:08:53.468 "write": true, 00:08:53.468 "unmap": true, 00:08:53.468 "flush": true, 00:08:53.468 "reset": true, 00:08:53.468 "nvme_admin": false, 00:08:53.468 "nvme_io": false, 00:08:53.468 "nvme_io_md": false, 00:08:53.468 "write_zeroes": true, 00:08:53.468 "zcopy": false, 00:08:53.468 "get_zone_info": false, 00:08:53.468 "zone_management": false, 00:08:53.468 "zone_append": false, 00:08:53.468 "compare": false, 00:08:53.468 "compare_and_write": false, 00:08:53.468 "abort": false, 00:08:53.468 "seek_hole": false, 00:08:53.468 "seek_data": false, 00:08:53.468 "copy": false, 00:08:53.468 "nvme_iov_md": false 00:08:53.468 }, 00:08:53.468 "memory_domains": [ 00:08:53.468 { 00:08:53.468 "dma_device_id": "system", 00:08:53.468 "dma_device_type": 1 00:08:53.468 }, 00:08:53.468 { 00:08:53.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.468 "dma_device_type": 2 00:08:53.468 }, 00:08:53.468 { 00:08:53.468 "dma_device_id": "system", 00:08:53.468 "dma_device_type": 1 00:08:53.468 }, 00:08:53.468 { 00:08:53.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.468 "dma_device_type": 2 00:08:53.468 } 00:08:53.468 ], 00:08:53.468 "driver_specific": { 00:08:53.468 "raid": { 00:08:53.468 "uuid": "50b97cfb-577c-4901-93da-ed7647e8a2d3", 00:08:53.468 "strip_size_kb": 64, 00:08:53.468 "state": "online", 00:08:53.468 "raid_level": "raid0", 00:08:53.468 "superblock": false, 00:08:53.468 "num_base_bdevs": 2, 00:08:53.468 "num_base_bdevs_discovered": 2, 00:08:53.468 "num_base_bdevs_operational": 2, 00:08:53.468 "base_bdevs_list": [ 00:08:53.468 { 00:08:53.468 "name": "BaseBdev1", 00:08:53.468 "uuid": "9b0fa0a9-abf3-4c85-86d1-19a8330c9daf", 00:08:53.468 "is_configured": true, 00:08:53.468 "data_offset": 0, 00:08:53.468 "data_size": 65536 00:08:53.468 }, 00:08:53.468 { 00:08:53.468 "name": "BaseBdev2", 
00:08:53.468 "uuid": "3af1ac41-430f-4ba9-89b3-7849d6314a99", 00:08:53.468 "is_configured": true, 00:08:53.468 "data_offset": 0, 00:08:53.468 "data_size": 65536 00:08:53.469 } 00:08:53.469 ] 00:08:53.469 } 00:08:53.469 } 00:08:53.469 }' 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:53.469 BaseBdev2' 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.469 16:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.727 [2024-11-08 16:50:23.046223] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:53.727 [2024-11-08 16:50:23.046303] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.727 [2024-11-08 16:50:23.046398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.727 "name": "Existed_Raid", 00:08:53.727 "uuid": "50b97cfb-577c-4901-93da-ed7647e8a2d3", 00:08:53.727 "strip_size_kb": 64, 00:08:53.727 
"state": "offline", 00:08:53.727 "raid_level": "raid0", 00:08:53.727 "superblock": false, 00:08:53.727 "num_base_bdevs": 2, 00:08:53.727 "num_base_bdevs_discovered": 1, 00:08:53.727 "num_base_bdevs_operational": 1, 00:08:53.727 "base_bdevs_list": [ 00:08:53.727 { 00:08:53.727 "name": null, 00:08:53.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.727 "is_configured": false, 00:08:53.727 "data_offset": 0, 00:08:53.727 "data_size": 65536 00:08:53.727 }, 00:08:53.727 { 00:08:53.727 "name": "BaseBdev2", 00:08:53.727 "uuid": "3af1ac41-430f-4ba9-89b3-7849d6314a99", 00:08:53.727 "is_configured": true, 00:08:53.727 "data_offset": 0, 00:08:53.727 "data_size": 65536 00:08:53.727 } 00:08:53.727 ] 00:08:53.727 }' 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.727 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.986 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.986 [2024-11-08 16:50:23.512836] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:53.986 [2024-11-08 16:50:23.512958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72114 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72114 ']' 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 72114 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72114 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72114' 00:08:54.246 killing process with pid 72114 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72114 00:08:54.246 [2024-11-08 16:50:23.622462] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.246 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72114 00:08:54.246 [2024-11-08 16:50:23.623489] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:54.506 ************************************ 00:08:54.506 END TEST raid_state_function_test 00:08:54.506 ************************************ 00:08:54.506 00:08:54.506 real 0m3.821s 00:08:54.506 user 0m5.966s 00:08:54.506 sys 0m0.747s 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.506 16:50:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:54.506 16:50:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:08:54.506 16:50:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.506 16:50:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.506 ************************************ 00:08:54.506 START TEST raid_state_function_test_sb 00:08:54.506 ************************************ 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72356 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72356' 00:08:54.506 Process raid pid: 72356 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72356 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72356 ']' 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.506 16:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.766 [2024-11-08 16:50:24.035383] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:54.766 [2024-11-08 16:50:24.035626] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.766 [2024-11-08 16:50:24.198293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.766 [2024-11-08 16:50:24.242835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.766 [2024-11-08 16:50:24.285212] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.766 [2024-11-08 16:50:24.285247] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.704 [2024-11-08 16:50:24.874946] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:55.704 [2024-11-08 16:50:24.875069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.704 [2024-11-08 16:50:24.875111] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.704 [2024-11-08 16:50:24.875135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.704 "name": "Existed_Raid", 00:08:55.704 "uuid": "7fb8e0e2-7a7d-47c8-b843-2975d97b90c3", 00:08:55.704 "strip_size_kb": 64, 00:08:55.704 "state": "configuring", 00:08:55.704 "raid_level": "raid0", 00:08:55.704 "superblock": true, 00:08:55.704 "num_base_bdevs": 2, 00:08:55.704 "num_base_bdevs_discovered": 0, 00:08:55.704 "num_base_bdevs_operational": 2, 00:08:55.704 "base_bdevs_list": [ 00:08:55.704 { 00:08:55.704 "name": "BaseBdev1", 00:08:55.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.704 "is_configured": false, 00:08:55.704 "data_offset": 0, 00:08:55.704 "data_size": 0 00:08:55.704 }, 00:08:55.704 { 00:08:55.704 "name": "BaseBdev2", 00:08:55.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.704 "is_configured": false, 00:08:55.704 "data_offset": 0, 00:08:55.704 "data_size": 0 00:08:55.704 } 00:08:55.704 ] 00:08:55.704 }' 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.704 16:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.964 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.964 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.964 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.964 [2024-11-08 16:50:25.334071] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:55.964 [2024-11-08 16:50:25.334194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:55.964 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.964 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.965 [2024-11-08 16:50:25.346099] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.965 [2024-11-08 16:50:25.346203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.965 [2024-11-08 16:50:25.346261] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.965 [2024-11-08 16:50:25.346289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.965 [2024-11-08 16:50:25.367003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.965 BaseBdev1 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.965 [ 00:08:55.965 { 00:08:55.965 "name": "BaseBdev1", 00:08:55.965 "aliases": [ 00:08:55.965 "fc186923-57a7-4cc3-9753-600b83211f90" 00:08:55.965 ], 00:08:55.965 "product_name": "Malloc disk", 00:08:55.965 "block_size": 512, 00:08:55.965 "num_blocks": 65536, 00:08:55.965 "uuid": "fc186923-57a7-4cc3-9753-600b83211f90", 00:08:55.965 "assigned_rate_limits": { 00:08:55.965 "rw_ios_per_sec": 0, 00:08:55.965 "rw_mbytes_per_sec": 0, 00:08:55.965 "r_mbytes_per_sec": 0, 00:08:55.965 "w_mbytes_per_sec": 0 00:08:55.965 }, 00:08:55.965 "claimed": true, 
00:08:55.965 "claim_type": "exclusive_write", 00:08:55.965 "zoned": false, 00:08:55.965 "supported_io_types": { 00:08:55.965 "read": true, 00:08:55.965 "write": true, 00:08:55.965 "unmap": true, 00:08:55.965 "flush": true, 00:08:55.965 "reset": true, 00:08:55.965 "nvme_admin": false, 00:08:55.965 "nvme_io": false, 00:08:55.965 "nvme_io_md": false, 00:08:55.965 "write_zeroes": true, 00:08:55.965 "zcopy": true, 00:08:55.965 "get_zone_info": false, 00:08:55.965 "zone_management": false, 00:08:55.965 "zone_append": false, 00:08:55.965 "compare": false, 00:08:55.965 "compare_and_write": false, 00:08:55.965 "abort": true, 00:08:55.965 "seek_hole": false, 00:08:55.965 "seek_data": false, 00:08:55.965 "copy": true, 00:08:55.965 "nvme_iov_md": false 00:08:55.965 }, 00:08:55.965 "memory_domains": [ 00:08:55.965 { 00:08:55.965 "dma_device_id": "system", 00:08:55.965 "dma_device_type": 1 00:08:55.965 }, 00:08:55.965 { 00:08:55.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.965 "dma_device_type": 2 00:08:55.965 } 00:08:55.965 ], 00:08:55.965 "driver_specific": {} 00:08:55.965 } 00:08:55.965 ] 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.965 16:50:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.965 "name": "Existed_Raid", 00:08:55.965 "uuid": "2194163d-e287-4d6e-807a-abbd375ece08", 00:08:55.965 "strip_size_kb": 64, 00:08:55.965 "state": "configuring", 00:08:55.965 "raid_level": "raid0", 00:08:55.965 "superblock": true, 00:08:55.965 "num_base_bdevs": 2, 00:08:55.965 "num_base_bdevs_discovered": 1, 00:08:55.965 "num_base_bdevs_operational": 2, 00:08:55.965 "base_bdevs_list": [ 00:08:55.965 { 00:08:55.965 "name": "BaseBdev1", 00:08:55.965 "uuid": "fc186923-57a7-4cc3-9753-600b83211f90", 00:08:55.965 "is_configured": true, 00:08:55.965 "data_offset": 2048, 00:08:55.965 "data_size": 63488 00:08:55.965 }, 00:08:55.965 { 00:08:55.965 "name": "BaseBdev2", 00:08:55.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.965 
"is_configured": false, 00:08:55.965 "data_offset": 0, 00:08:55.965 "data_size": 0 00:08:55.965 } 00:08:55.965 ] 00:08:55.965 }' 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.965 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.535 [2024-11-08 16:50:25.858227] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.535 [2024-11-08 16:50:25.858350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.535 [2024-11-08 16:50:25.870228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.535 [2024-11-08 16:50:25.872096] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.535 [2024-11-08 16:50:25.872175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.535 16:50:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.535 16:50:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.535 "name": "Existed_Raid", 00:08:56.535 "uuid": "d348b3ab-3108-43b7-9013-bafeb88fa296", 00:08:56.535 "strip_size_kb": 64, 00:08:56.535 "state": "configuring", 00:08:56.535 "raid_level": "raid0", 00:08:56.535 "superblock": true, 00:08:56.535 "num_base_bdevs": 2, 00:08:56.535 "num_base_bdevs_discovered": 1, 00:08:56.535 "num_base_bdevs_operational": 2, 00:08:56.535 "base_bdevs_list": [ 00:08:56.535 { 00:08:56.535 "name": "BaseBdev1", 00:08:56.535 "uuid": "fc186923-57a7-4cc3-9753-600b83211f90", 00:08:56.535 "is_configured": true, 00:08:56.535 "data_offset": 2048, 00:08:56.535 "data_size": 63488 00:08:56.535 }, 00:08:56.535 { 00:08:56.535 "name": "BaseBdev2", 00:08:56.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.535 "is_configured": false, 00:08:56.535 "data_offset": 0, 00:08:56.535 "data_size": 0 00:08:56.535 } 00:08:56.535 ] 00:08:56.535 }' 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.535 16:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.105 [2024-11-08 16:50:26.353153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.105 [2024-11-08 16:50:26.353493] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:57.105 BaseBdev2 00:08:57.105 [2024-11-08 16:50:26.353562] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:57.105 [2024-11-08 16:50:26.353952] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:57.105 [2024-11-08 16:50:26.354120] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:57.105 [2024-11-08 16:50:26.354139] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:57.105 [2024-11-08 16:50:26.354287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.105 
16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.105 [ 00:08:57.105 { 00:08:57.105 "name": "BaseBdev2", 00:08:57.105 "aliases": [ 00:08:57.105 "d577a766-d590-4343-b9f0-e2730b99787f" 00:08:57.105 ], 00:08:57.105 "product_name": "Malloc disk", 00:08:57.105 "block_size": 512, 00:08:57.105 "num_blocks": 65536, 00:08:57.105 "uuid": "d577a766-d590-4343-b9f0-e2730b99787f", 00:08:57.105 "assigned_rate_limits": { 00:08:57.105 "rw_ios_per_sec": 0, 00:08:57.105 "rw_mbytes_per_sec": 0, 00:08:57.105 "r_mbytes_per_sec": 0, 00:08:57.105 "w_mbytes_per_sec": 0 00:08:57.105 }, 00:08:57.105 "claimed": true, 00:08:57.105 "claim_type": "exclusive_write", 00:08:57.105 "zoned": false, 00:08:57.105 "supported_io_types": { 00:08:57.105 "read": true, 00:08:57.105 "write": true, 00:08:57.105 "unmap": true, 00:08:57.105 "flush": true, 00:08:57.105 "reset": true, 00:08:57.105 "nvme_admin": false, 00:08:57.105 "nvme_io": false, 00:08:57.105 "nvme_io_md": false, 00:08:57.105 "write_zeroes": true, 00:08:57.105 "zcopy": true, 00:08:57.105 "get_zone_info": false, 00:08:57.105 "zone_management": false, 00:08:57.105 "zone_append": false, 00:08:57.105 "compare": false, 00:08:57.105 "compare_and_write": false, 00:08:57.105 "abort": true, 00:08:57.105 "seek_hole": false, 00:08:57.105 "seek_data": false, 00:08:57.105 "copy": true, 00:08:57.105 "nvme_iov_md": false 00:08:57.105 }, 00:08:57.105 "memory_domains": [ 00:08:57.105 { 00:08:57.105 "dma_device_id": "system", 00:08:57.105 "dma_device_type": 1 00:08:57.105 }, 00:08:57.105 { 00:08:57.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.105 "dma_device_type": 2 00:08:57.105 } 00:08:57.105 ], 00:08:57.105 "driver_specific": {} 00:08:57.105 } 00:08:57.105 ] 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:57.105 16:50:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.105 16:50:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.105 "name": "Existed_Raid", 00:08:57.105 "uuid": "d348b3ab-3108-43b7-9013-bafeb88fa296", 00:08:57.105 "strip_size_kb": 64, 00:08:57.105 "state": "online", 00:08:57.105 "raid_level": "raid0", 00:08:57.105 "superblock": true, 00:08:57.105 "num_base_bdevs": 2, 00:08:57.105 "num_base_bdevs_discovered": 2, 00:08:57.105 "num_base_bdevs_operational": 2, 00:08:57.105 "base_bdevs_list": [ 00:08:57.105 { 00:08:57.105 "name": "BaseBdev1", 00:08:57.105 "uuid": "fc186923-57a7-4cc3-9753-600b83211f90", 00:08:57.105 "is_configured": true, 00:08:57.105 "data_offset": 2048, 00:08:57.105 "data_size": 63488 00:08:57.105 }, 00:08:57.105 { 00:08:57.105 "name": "BaseBdev2", 00:08:57.105 "uuid": "d577a766-d590-4343-b9f0-e2730b99787f", 00:08:57.105 "is_configured": true, 00:08:57.105 "data_offset": 2048, 00:08:57.105 "data_size": 63488 00:08:57.105 } 00:08:57.105 ] 00:08:57.105 }' 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.105 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.365 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.365 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.365 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.365 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.365 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.365 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.365 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:57.365 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.365 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.365 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.365 [2024-11-08 16:50:26.864719] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.365 16:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.626 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.626 "name": "Existed_Raid", 00:08:57.626 "aliases": [ 00:08:57.626 "d348b3ab-3108-43b7-9013-bafeb88fa296" 00:08:57.626 ], 00:08:57.626 "product_name": "Raid Volume", 00:08:57.626 "block_size": 512, 00:08:57.626 "num_blocks": 126976, 00:08:57.626 "uuid": "d348b3ab-3108-43b7-9013-bafeb88fa296", 00:08:57.626 "assigned_rate_limits": { 00:08:57.626 "rw_ios_per_sec": 0, 00:08:57.626 "rw_mbytes_per_sec": 0, 00:08:57.626 "r_mbytes_per_sec": 0, 00:08:57.626 "w_mbytes_per_sec": 0 00:08:57.626 }, 00:08:57.626 "claimed": false, 00:08:57.626 "zoned": false, 00:08:57.626 "supported_io_types": { 00:08:57.626 "read": true, 00:08:57.626 "write": true, 00:08:57.626 "unmap": true, 00:08:57.626 "flush": true, 00:08:57.626 "reset": true, 00:08:57.626 "nvme_admin": false, 00:08:57.626 "nvme_io": false, 00:08:57.626 "nvme_io_md": false, 00:08:57.626 "write_zeroes": true, 00:08:57.626 "zcopy": false, 00:08:57.626 "get_zone_info": false, 00:08:57.626 "zone_management": false, 00:08:57.626 "zone_append": false, 00:08:57.626 "compare": false, 00:08:57.626 "compare_and_write": false, 00:08:57.626 "abort": false, 00:08:57.626 "seek_hole": false, 00:08:57.626 "seek_data": false, 00:08:57.626 "copy": false, 00:08:57.626 "nvme_iov_md": false 00:08:57.626 }, 00:08:57.626 "memory_domains": [ 00:08:57.626 { 00:08:57.626 
"dma_device_id": "system", 00:08:57.626 "dma_device_type": 1 00:08:57.626 }, 00:08:57.626 { 00:08:57.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.626 "dma_device_type": 2 00:08:57.626 }, 00:08:57.626 { 00:08:57.626 "dma_device_id": "system", 00:08:57.626 "dma_device_type": 1 00:08:57.626 }, 00:08:57.626 { 00:08:57.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.626 "dma_device_type": 2 00:08:57.626 } 00:08:57.626 ], 00:08:57.626 "driver_specific": { 00:08:57.626 "raid": { 00:08:57.626 "uuid": "d348b3ab-3108-43b7-9013-bafeb88fa296", 00:08:57.626 "strip_size_kb": 64, 00:08:57.626 "state": "online", 00:08:57.626 "raid_level": "raid0", 00:08:57.626 "superblock": true, 00:08:57.626 "num_base_bdevs": 2, 00:08:57.626 "num_base_bdevs_discovered": 2, 00:08:57.626 "num_base_bdevs_operational": 2, 00:08:57.626 "base_bdevs_list": [ 00:08:57.626 { 00:08:57.626 "name": "BaseBdev1", 00:08:57.626 "uuid": "fc186923-57a7-4cc3-9753-600b83211f90", 00:08:57.626 "is_configured": true, 00:08:57.626 "data_offset": 2048, 00:08:57.626 "data_size": 63488 00:08:57.626 }, 00:08:57.626 { 00:08:57.626 "name": "BaseBdev2", 00:08:57.626 "uuid": "d577a766-d590-4343-b9f0-e2730b99787f", 00:08:57.626 "is_configured": true, 00:08:57.626 "data_offset": 2048, 00:08:57.626 "data_size": 63488 00:08:57.626 } 00:08:57.626 ] 00:08:57.626 } 00:08:57.626 } 00:08:57.626 }' 00:08:57.626 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.626 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:57.626 BaseBdev2' 00:08:57.626 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.626 16:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.626 16:50:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.626 [2024-11-08 16:50:27.087981] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.626 [2024-11-08 16:50:27.088067] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.626 [2024-11-08 16:50:27.088155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.626 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.627 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.627 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.627 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.627 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.886 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.886 "name": "Existed_Raid", 00:08:57.886 "uuid": "d348b3ab-3108-43b7-9013-bafeb88fa296", 00:08:57.886 "strip_size_kb": 64, 00:08:57.886 "state": "offline", 00:08:57.886 "raid_level": "raid0", 00:08:57.886 "superblock": true, 00:08:57.886 "num_base_bdevs": 2, 00:08:57.886 "num_base_bdevs_discovered": 1, 00:08:57.886 "num_base_bdevs_operational": 1, 00:08:57.886 "base_bdevs_list": [ 00:08:57.886 { 00:08:57.886 "name": null, 00:08:57.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.886 "is_configured": false, 00:08:57.886 "data_offset": 0, 00:08:57.886 "data_size": 63488 00:08:57.886 }, 00:08:57.886 { 00:08:57.886 "name": "BaseBdev2", 00:08:57.886 "uuid": "d577a766-d590-4343-b9f0-e2730b99787f", 00:08:57.886 "is_configured": true, 00:08:57.886 "data_offset": 2048, 00:08:57.886 "data_size": 63488 00:08:57.886 } 00:08:57.886 ] 
00:08:57.886 }' 00:08:57.886 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.886 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.146 [2024-11-08 16:50:27.590596] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.146 [2024-11-08 16:50:27.590747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.146 16:50:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72356 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72356 ']' 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72356 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.146 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72356 00:08:58.406 killing process with pid 72356 00:08:58.406 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:58.406 16:50:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:58.406 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72356' 00:08:58.406 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72356 00:08:58.406 [2024-11-08 16:50:27.685547] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.406 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72356 00:08:58.406 [2024-11-08 16:50:27.686511] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.406 16:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:58.406 00:08:58.406 real 0m3.985s 00:08:58.406 user 0m6.279s 00:08:58.406 sys 0m0.771s 00:08:58.406 ************************************ 00:08:58.406 END TEST raid_state_function_test_sb 00:08:58.406 ************************************ 00:08:58.406 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.406 16:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.668 16:50:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:58.668 16:50:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:58.668 16:50:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.668 16:50:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.668 ************************************ 00:08:58.668 START TEST raid_superblock_test 00:08:58.668 ************************************ 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72597 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72597 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72597 ']' 00:08:58.668 
16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.668 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.668 [2024-11-08 16:50:28.085743] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:58.668 [2024-11-08 16:50:28.085959] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72597 ] 00:08:58.971 [2024-11-08 16:50:28.246622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.971 [2024-11-08 16:50:28.297315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.971 [2024-11-08 16:50:28.339050] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.971 [2024-11-08 16:50:28.339169] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.555 malloc1 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.555 [2024-11-08 16:50:28.941342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:59.555 [2024-11-08 16:50:28.941536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.555 [2024-11-08 16:50:28.941582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:59.555 [2024-11-08 16:50:28.941639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:59.555 [2024-11-08 16:50:28.943757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.555 [2024-11-08 16:50:28.943841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:59.555 pt1 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.555 malloc2 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.555 [2024-11-08 16:50:28.980525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.555 [2024-11-08 16:50:28.980698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.555 [2024-11-08 16:50:28.980741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:59.555 [2024-11-08 16:50:28.980786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.555 [2024-11-08 16:50:28.982913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.555 [2024-11-08 16:50:28.982982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.555 pt2 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.555 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.556 [2024-11-08 16:50:28.992551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:59.556 [2024-11-08 16:50:28.994364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.556 [2024-11-08 16:50:28.994549] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:59.556 [2024-11-08 16:50:28.994601] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:59.556 [2024-11-08 16:50:28.994882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:59.556 [2024-11-08 16:50:28.995047] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:59.556 [2024-11-08 16:50:28.995089] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:59.556 [2024-11-08 16:50:28.995257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.556 16:50:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.556 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:59.556 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.556 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.556 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.556 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.556 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.556 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.556 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.556 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.556 16:50:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.556 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.556 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.556 16:50:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.556 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.556 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.556 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.556 "name": "raid_bdev1", 00:08:59.556 "uuid": "10853805-4adc-4e87-ac4f-3d69fcd27398", 00:08:59.556 "strip_size_kb": 64, 00:08:59.556 "state": "online", 00:08:59.556 "raid_level": "raid0", 00:08:59.556 "superblock": true, 00:08:59.556 "num_base_bdevs": 2, 00:08:59.556 "num_base_bdevs_discovered": 2, 00:08:59.556 "num_base_bdevs_operational": 2, 00:08:59.556 "base_bdevs_list": [ 00:08:59.556 { 00:08:59.556 "name": "pt1", 00:08:59.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.556 "is_configured": true, 00:08:59.556 "data_offset": 2048, 00:08:59.556 "data_size": 63488 00:08:59.556 }, 00:08:59.556 { 00:08:59.556 "name": "pt2", 00:08:59.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.556 "is_configured": true, 00:08:59.556 "data_offset": 2048, 00:08:59.556 "data_size": 63488 00:08:59.556 } 00:08:59.556 ] 00:08:59.556 }' 00:08:59.556 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.556 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.125 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:00.125 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:00.125 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.125 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.125 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.125 
16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.125 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.125 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.125 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.125 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.125 [2024-11-08 16:50:29.372177] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.125 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.125 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.125 "name": "raid_bdev1", 00:09:00.125 "aliases": [ 00:09:00.125 "10853805-4adc-4e87-ac4f-3d69fcd27398" 00:09:00.125 ], 00:09:00.125 "product_name": "Raid Volume", 00:09:00.125 "block_size": 512, 00:09:00.125 "num_blocks": 126976, 00:09:00.125 "uuid": "10853805-4adc-4e87-ac4f-3d69fcd27398", 00:09:00.125 "assigned_rate_limits": { 00:09:00.125 "rw_ios_per_sec": 0, 00:09:00.125 "rw_mbytes_per_sec": 0, 00:09:00.125 "r_mbytes_per_sec": 0, 00:09:00.125 "w_mbytes_per_sec": 0 00:09:00.125 }, 00:09:00.125 "claimed": false, 00:09:00.125 "zoned": false, 00:09:00.125 "supported_io_types": { 00:09:00.125 "read": true, 00:09:00.125 "write": true, 00:09:00.125 "unmap": true, 00:09:00.125 "flush": true, 00:09:00.125 "reset": true, 00:09:00.125 "nvme_admin": false, 00:09:00.125 "nvme_io": false, 00:09:00.125 "nvme_io_md": false, 00:09:00.125 "write_zeroes": true, 00:09:00.125 "zcopy": false, 00:09:00.125 "get_zone_info": false, 00:09:00.126 "zone_management": false, 00:09:00.126 "zone_append": false, 00:09:00.126 "compare": false, 00:09:00.126 "compare_and_write": false, 00:09:00.126 "abort": false, 00:09:00.126 "seek_hole": false, 00:09:00.126 
"seek_data": false, 00:09:00.126 "copy": false, 00:09:00.126 "nvme_iov_md": false 00:09:00.126 }, 00:09:00.126 "memory_domains": [ 00:09:00.126 { 00:09:00.126 "dma_device_id": "system", 00:09:00.126 "dma_device_type": 1 00:09:00.126 }, 00:09:00.126 { 00:09:00.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.126 "dma_device_type": 2 00:09:00.126 }, 00:09:00.126 { 00:09:00.126 "dma_device_id": "system", 00:09:00.126 "dma_device_type": 1 00:09:00.126 }, 00:09:00.126 { 00:09:00.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.126 "dma_device_type": 2 00:09:00.126 } 00:09:00.126 ], 00:09:00.126 "driver_specific": { 00:09:00.126 "raid": { 00:09:00.126 "uuid": "10853805-4adc-4e87-ac4f-3d69fcd27398", 00:09:00.126 "strip_size_kb": 64, 00:09:00.126 "state": "online", 00:09:00.126 "raid_level": "raid0", 00:09:00.126 "superblock": true, 00:09:00.126 "num_base_bdevs": 2, 00:09:00.126 "num_base_bdevs_discovered": 2, 00:09:00.126 "num_base_bdevs_operational": 2, 00:09:00.126 "base_bdevs_list": [ 00:09:00.126 { 00:09:00.126 "name": "pt1", 00:09:00.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.126 "is_configured": true, 00:09:00.126 "data_offset": 2048, 00:09:00.126 "data_size": 63488 00:09:00.126 }, 00:09:00.126 { 00:09:00.126 "name": "pt2", 00:09:00.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.126 "is_configured": true, 00:09:00.126 "data_offset": 2048, 00:09:00.126 "data_size": 63488 00:09:00.126 } 00:09:00.126 ] 00:09:00.126 } 00:09:00.126 } 00:09:00.126 }' 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:00.126 pt2' 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.126 16:50:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:00.126 [2024-11-08 16:50:29.555759] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=10853805-4adc-4e87-ac4f-3d69fcd27398 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 10853805-4adc-4e87-ac4f-3d69fcd27398 ']' 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.126 [2024-11-08 16:50:29.603422] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.126 [2024-11-08 16:50:29.603497] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.126 [2024-11-08 16:50:29.603596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.126 [2024-11-08 16:50:29.603695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.126 [2024-11-08 16:50:29.603752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:00.126 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.385 [2024-11-08 16:50:29.731329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:00.385 [2024-11-08 16:50:29.733216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:00.385 [2024-11-08 16:50:29.733332] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:00.385 [2024-11-08 16:50:29.733425] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:00.385 [2024-11-08 16:50:29.733482] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.385 [2024-11-08 16:50:29.733518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:00.385 request: 00:09:00.385 { 00:09:00.385 "name": "raid_bdev1", 00:09:00.385 "raid_level": "raid0", 00:09:00.385 "base_bdevs": [ 00:09:00.385 "malloc1", 00:09:00.385 "malloc2" 00:09:00.385 ], 00:09:00.385 "strip_size_kb": 64, 00:09:00.385 "superblock": false, 00:09:00.385 "method": "bdev_raid_create", 00:09:00.385 "req_id": 1 00:09:00.385 } 00:09:00.385 Got JSON-RPC error response 00:09:00.385 response: 00:09:00.385 { 00:09:00.385 "code": -17, 00:09:00.385 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:00.385 } 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:00.385 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.386 
16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.386 [2024-11-08 16:50:29.799268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:00.386 [2024-11-08 16:50:29.799373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.386 [2024-11-08 16:50:29.799407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:00.386 [2024-11-08 16:50:29.799435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.386 [2024-11-08 16:50:29.801568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.386 [2024-11-08 16:50:29.801663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:00.386 [2024-11-08 16:50:29.801755] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:00.386 [2024-11-08 16:50:29.801823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:00.386 pt1 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.386 "name": "raid_bdev1", 00:09:00.386 "uuid": "10853805-4adc-4e87-ac4f-3d69fcd27398", 00:09:00.386 "strip_size_kb": 64, 00:09:00.386 "state": "configuring", 00:09:00.386 "raid_level": "raid0", 00:09:00.386 "superblock": true, 00:09:00.386 "num_base_bdevs": 2, 00:09:00.386 "num_base_bdevs_discovered": 1, 00:09:00.386 "num_base_bdevs_operational": 2, 00:09:00.386 "base_bdevs_list": [ 00:09:00.386 { 00:09:00.386 "name": "pt1", 00:09:00.386 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:00.386 "is_configured": true, 00:09:00.386 "data_offset": 2048, 00:09:00.386 "data_size": 63488 00:09:00.386 }, 00:09:00.386 { 00:09:00.386 "name": null, 00:09:00.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.386 "is_configured": false, 00:09:00.386 "data_offset": 2048, 00:09:00.386 "data_size": 63488 00:09:00.386 } 00:09:00.386 ] 00:09:00.386 }' 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.386 16:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.956 [2024-11-08 16:50:30.238554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:00.956 [2024-11-08 16:50:30.238687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.956 [2024-11-08 16:50:30.238731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:00.956 [2024-11-08 16:50:30.238758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.956 [2024-11-08 16:50:30.239180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.956 [2024-11-08 16:50:30.239256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:09:00.956 [2024-11-08 16:50:30.239354] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:00.956 [2024-11-08 16:50:30.239404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:00.956 [2024-11-08 16:50:30.239518] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:00.956 [2024-11-08 16:50:30.239557] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:00.956 [2024-11-08 16:50:30.239820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:00.956 [2024-11-08 16:50:30.239969] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:00.956 [2024-11-08 16:50:30.240018] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:00.956 [2024-11-08 16:50:30.240154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.956 pt2 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.956 "name": "raid_bdev1", 00:09:00.956 "uuid": "10853805-4adc-4e87-ac4f-3d69fcd27398", 00:09:00.956 "strip_size_kb": 64, 00:09:00.956 "state": "online", 00:09:00.956 "raid_level": "raid0", 00:09:00.956 "superblock": true, 00:09:00.956 "num_base_bdevs": 2, 00:09:00.956 "num_base_bdevs_discovered": 2, 00:09:00.956 "num_base_bdevs_operational": 2, 00:09:00.956 "base_bdevs_list": [ 00:09:00.956 { 00:09:00.956 "name": "pt1", 00:09:00.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.956 "is_configured": true, 00:09:00.956 "data_offset": 2048, 00:09:00.956 "data_size": 63488 00:09:00.956 }, 00:09:00.956 { 00:09:00.956 "name": "pt2", 00:09:00.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.956 "is_configured": true, 00:09:00.956 "data_offset": 2048, 00:09:00.956 "data_size": 63488 00:09:00.956 } 00:09:00.956 ] 00:09:00.956 }' 00:09:00.956 16:50:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.956 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.216 [2024-11-08 16:50:30.678086] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.216 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.216 "name": "raid_bdev1", 00:09:01.216 "aliases": [ 00:09:01.216 "10853805-4adc-4e87-ac4f-3d69fcd27398" 00:09:01.216 ], 00:09:01.216 "product_name": "Raid Volume", 00:09:01.216 "block_size": 512, 00:09:01.216 "num_blocks": 126976, 00:09:01.216 "uuid": "10853805-4adc-4e87-ac4f-3d69fcd27398", 00:09:01.216 "assigned_rate_limits": { 00:09:01.216 "rw_ios_per_sec": 0, 00:09:01.216 "rw_mbytes_per_sec": 0, 00:09:01.216 
"r_mbytes_per_sec": 0, 00:09:01.216 "w_mbytes_per_sec": 0 00:09:01.216 }, 00:09:01.216 "claimed": false, 00:09:01.216 "zoned": false, 00:09:01.216 "supported_io_types": { 00:09:01.216 "read": true, 00:09:01.216 "write": true, 00:09:01.216 "unmap": true, 00:09:01.216 "flush": true, 00:09:01.216 "reset": true, 00:09:01.216 "nvme_admin": false, 00:09:01.216 "nvme_io": false, 00:09:01.216 "nvme_io_md": false, 00:09:01.216 "write_zeroes": true, 00:09:01.216 "zcopy": false, 00:09:01.216 "get_zone_info": false, 00:09:01.216 "zone_management": false, 00:09:01.216 "zone_append": false, 00:09:01.216 "compare": false, 00:09:01.216 "compare_and_write": false, 00:09:01.216 "abort": false, 00:09:01.216 "seek_hole": false, 00:09:01.216 "seek_data": false, 00:09:01.216 "copy": false, 00:09:01.216 "nvme_iov_md": false 00:09:01.216 }, 00:09:01.216 "memory_domains": [ 00:09:01.216 { 00:09:01.216 "dma_device_id": "system", 00:09:01.216 "dma_device_type": 1 00:09:01.216 }, 00:09:01.216 { 00:09:01.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.216 "dma_device_type": 2 00:09:01.216 }, 00:09:01.216 { 00:09:01.216 "dma_device_id": "system", 00:09:01.216 "dma_device_type": 1 00:09:01.216 }, 00:09:01.216 { 00:09:01.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.216 "dma_device_type": 2 00:09:01.216 } 00:09:01.216 ], 00:09:01.216 "driver_specific": { 00:09:01.217 "raid": { 00:09:01.217 "uuid": "10853805-4adc-4e87-ac4f-3d69fcd27398", 00:09:01.217 "strip_size_kb": 64, 00:09:01.217 "state": "online", 00:09:01.217 "raid_level": "raid0", 00:09:01.217 "superblock": true, 00:09:01.217 "num_base_bdevs": 2, 00:09:01.217 "num_base_bdevs_discovered": 2, 00:09:01.217 "num_base_bdevs_operational": 2, 00:09:01.217 "base_bdevs_list": [ 00:09:01.217 { 00:09:01.217 "name": "pt1", 00:09:01.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.217 "is_configured": true, 00:09:01.217 "data_offset": 2048, 00:09:01.217 "data_size": 63488 00:09:01.217 }, 00:09:01.217 { 00:09:01.217 "name": 
"pt2", 00:09:01.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.217 "is_configured": true, 00:09:01.217 "data_offset": 2048, 00:09:01.217 "data_size": 63488 00:09:01.217 } 00:09:01.217 ] 00:09:01.217 } 00:09:01.217 } 00:09:01.217 }' 00:09:01.217 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:01.476 pt2' 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.476 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:01.477 [2024-11-08 16:50:30.925614] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 10853805-4adc-4e87-ac4f-3d69fcd27398 '!=' 10853805-4adc-4e87-ac4f-3d69fcd27398 ']' 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72597 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72597 ']' 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 72597 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:01.477 16:50:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72597 00:09:01.735 killing process with pid 72597 00:09:01.735 16:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:01.735 16:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:01.735 16:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72597' 00:09:01.735 16:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72597 00:09:01.735 [2024-11-08 16:50:31.013798] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.735 [2024-11-08 16:50:31.013879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.735 [2024-11-08 16:50:31.013930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.735 [2024-11-08 16:50:31.013940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:01.735 16:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72597 00:09:01.735 [2024-11-08 16:50:31.036317] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.995 16:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:01.995 00:09:01.995 real 0m3.279s 00:09:01.995 user 0m5.018s 00:09:01.995 sys 0m0.648s 00:09:01.995 16:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.995 ************************************ 00:09:01.995 END TEST 
raid_superblock_test 00:09:01.995 ************************************ 00:09:01.995 16:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.995 16:50:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:01.995 16:50:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:01.995 16:50:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.995 16:50:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.995 ************************************ 00:09:01.995 START TEST raid_read_error_test 00:09:01.995 ************************************ 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Mg2MkVQ2sM 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72792 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72792 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72792 ']' 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.995 16:50:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.995 16:50:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.995 [2024-11-08 16:50:31.449831] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:01.995 [2024-11-08 16:50:31.450047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72792 ] 00:09:02.254 [2024-11-08 16:50:31.609696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.254 [2024-11-08 16:50:31.654893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.254 [2024-11-08 16:50:31.697324] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.254 [2024-11-08 16:50:31.697361] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:02.823 BaseBdev1_malloc 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.823 true 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.823 [2024-11-08 16:50:32.331405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:02.823 [2024-11-08 16:50:32.331558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.823 [2024-11-08 16:50:32.331599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:02.823 [2024-11-08 16:50:32.331615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.823 [2024-11-08 16:50:32.333755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.823 [2024-11-08 16:50:32.333794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:02.823 BaseBdev1 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.823 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 BaseBdev2_malloc 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 true 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 [2024-11-08 16:50:32.383354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:03.083 [2024-11-08 16:50:32.383414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.083 [2024-11-08 16:50:32.383433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:03.083 [2024-11-08 16:50:32.383442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.083 [2024-11-08 16:50:32.385487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.083 [2024-11-08 16:50:32.385525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:03.083 BaseBdev2 00:09:03.083 16:50:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 [2024-11-08 16:50:32.395363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.083 [2024-11-08 16:50:32.397235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.083 [2024-11-08 16:50:32.397476] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:03.083 [2024-11-08 16:50:32.397524] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:03.083 [2024-11-08 16:50:32.397832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:03.083 [2024-11-08 16:50:32.398008] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:03.083 [2024-11-08 16:50:32.398057] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:03.083 [2024-11-08 16:50:32.398241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.083 16:50:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.083 "name": "raid_bdev1", 00:09:03.083 "uuid": "f41daf2a-d161-4865-a3a2-99d25c8dd109", 00:09:03.083 "strip_size_kb": 64, 00:09:03.083 "state": "online", 00:09:03.083 "raid_level": "raid0", 00:09:03.083 "superblock": true, 00:09:03.083 "num_base_bdevs": 2, 00:09:03.083 "num_base_bdevs_discovered": 2, 00:09:03.084 "num_base_bdevs_operational": 2, 00:09:03.084 "base_bdevs_list": [ 00:09:03.084 { 00:09:03.084 "name": "BaseBdev1", 00:09:03.084 "uuid": "b5f8fe11-85bc-5404-b314-fba0160739ec", 00:09:03.084 "is_configured": true, 00:09:03.084 "data_offset": 2048, 00:09:03.084 "data_size": 63488 00:09:03.084 }, 
00:09:03.084 { 00:09:03.084 "name": "BaseBdev2", 00:09:03.084 "uuid": "b4dc8171-c46a-5dd3-9725-ba701e319828", 00:09:03.084 "is_configured": true, 00:09:03.084 "data_offset": 2048, 00:09:03.084 "data_size": 63488 00:09:03.084 } 00:09:03.084 ] 00:09:03.084 }' 00:09:03.084 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.084 16:50:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.343 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:03.343 16:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:03.602 [2024-11-08 16:50:32.910966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.543 16:50:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.543 "name": "raid_bdev1", 00:09:04.543 "uuid": "f41daf2a-d161-4865-a3a2-99d25c8dd109", 00:09:04.543 "strip_size_kb": 64, 00:09:04.543 "state": "online", 00:09:04.543 "raid_level": "raid0", 00:09:04.543 "superblock": true, 00:09:04.543 "num_base_bdevs": 2, 00:09:04.543 "num_base_bdevs_discovered": 2, 00:09:04.543 "num_base_bdevs_operational": 2, 00:09:04.543 "base_bdevs_list": [ 00:09:04.543 { 00:09:04.543 "name": "BaseBdev1", 00:09:04.543 "uuid": "b5f8fe11-85bc-5404-b314-fba0160739ec", 00:09:04.543 "is_configured": true, 00:09:04.543 "data_offset": 2048, 00:09:04.543 "data_size": 63488 00:09:04.543 }, 
00:09:04.543 { 00:09:04.543 "name": "BaseBdev2", 00:09:04.543 "uuid": "b4dc8171-c46a-5dd3-9725-ba701e319828", 00:09:04.543 "is_configured": true, 00:09:04.543 "data_offset": 2048, 00:09:04.543 "data_size": 63488 00:09:04.543 } 00:09:04.543 ] 00:09:04.543 }' 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.543 16:50:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.803 16:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:04.803 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.803 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.803 [2024-11-08 16:50:34.218659] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.803 [2024-11-08 16:50:34.218778] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.803 [2024-11-08 16:50:34.221248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.803 [2024-11-08 16:50:34.221340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.803 [2024-11-08 16:50:34.221395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.803 [2024-11-08 16:50:34.221433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:04.803 { 00:09:04.803 "results": [ 00:09:04.803 { 00:09:04.803 "job": "raid_bdev1", 00:09:04.803 "core_mask": "0x1", 00:09:04.803 "workload": "randrw", 00:09:04.803 "percentage": 50, 00:09:04.803 "status": "finished", 00:09:04.803 "queue_depth": 1, 00:09:04.803 "io_size": 131072, 00:09:04.803 "runtime": 1.308416, 00:09:04.804 "iops": 17484.500342398747, 00:09:04.804 "mibps": 2185.5625427998434, 00:09:04.804 "io_failed": 1, 
00:09:04.804 "io_timeout": 0, 00:09:04.804 "avg_latency_us": 79.02913063445327, 00:09:04.804 "min_latency_us": 24.482096069868994, 00:09:04.804 "max_latency_us": 1402.2986899563318 00:09:04.804 } 00:09:04.804 ], 00:09:04.804 "core_count": 1 00:09:04.804 } 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72792 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72792 ']' 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72792 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72792 00:09:04.804 killing process with pid 72792 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72792' 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72792 00:09:04.804 [2024-11-08 16:50:34.267473] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.804 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72792 00:09:04.804 [2024-11-08 16:50:34.282441] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.064 16:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Mg2MkVQ2sM 00:09:05.064 16:50:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:05.064 16:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:05.064 16:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:05.064 16:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:05.064 16:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:05.064 16:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:05.064 16:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:05.064 00:09:05.064 real 0m3.173s 00:09:05.064 user 0m3.972s 00:09:05.064 sys 0m0.504s 00:09:05.064 ************************************ 00:09:05.064 END TEST raid_read_error_test 00:09:05.064 ************************************ 00:09:05.064 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.064 16:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.064 16:50:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:05.064 16:50:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:05.064 16:50:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.064 16:50:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.323 ************************************ 00:09:05.323 START TEST raid_write_error_test 00:09:05.323 ************************************ 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 
-- # mktemp -p /raidtest 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VdjS8wzPof 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72921 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72921 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72921 ']' 00:09:05.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.323 16:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.323 [2024-11-08 16:50:34.703761] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:05.324 [2024-11-08 16:50:34.703903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72921 ] 00:09:05.583 [2024-11-08 16:50:34.863426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.583 [2024-11-08 16:50:34.913245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.583 [2024-11-08 16:50:34.955288] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.583 [2024-11-08 16:50:34.955341] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.216 BaseBdev1_malloc 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.216 true 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.216 [2024-11-08 16:50:35.573780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:06.216 [2024-11-08 16:50:35.573942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.216 [2024-11-08 16:50:35.573980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:06.216 [2024-11-08 16:50:35.574008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.216 [2024-11-08 16:50:35.576131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.216 [2024-11-08 16:50:35.576207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:06.216 BaseBdev1 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.216 BaseBdev2_malloc 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:06.216 16:50:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.216 true 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.216 [2024-11-08 16:50:35.624151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:06.216 [2024-11-08 16:50:35.624318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.216 [2024-11-08 16:50:35.624358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:06.216 [2024-11-08 16:50:35.624394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.216 [2024-11-08 16:50:35.626541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.216 [2024-11-08 16:50:35.626620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:06.216 BaseBdev2 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.216 [2024-11-08 16:50:35.636168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:06.216 [2024-11-08 16:50:35.638083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.216 [2024-11-08 16:50:35.638257] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:06.216 [2024-11-08 16:50:35.638271] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:06.216 [2024-11-08 16:50:35.638546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:06.216 [2024-11-08 16:50:35.638694] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:06.216 [2024-11-08 16:50:35.638714] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:06.216 [2024-11-08 16:50:35.638845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.216 "name": "raid_bdev1", 00:09:06.216 "uuid": "106769ca-11c6-4c64-ae38-ff8690f13dd8", 00:09:06.216 "strip_size_kb": 64, 00:09:06.216 "state": "online", 00:09:06.216 "raid_level": "raid0", 00:09:06.216 "superblock": true, 00:09:06.216 "num_base_bdevs": 2, 00:09:06.216 "num_base_bdevs_discovered": 2, 00:09:06.216 "num_base_bdevs_operational": 2, 00:09:06.216 "base_bdevs_list": [ 00:09:06.216 { 00:09:06.216 "name": "BaseBdev1", 00:09:06.216 "uuid": "1e3320b6-e823-52c5-9b19-374bc877cee2", 00:09:06.216 "is_configured": true, 00:09:06.216 "data_offset": 2048, 00:09:06.216 "data_size": 63488 00:09:06.216 }, 00:09:06.216 { 00:09:06.216 "name": "BaseBdev2", 00:09:06.216 "uuid": "b68c585c-1e1f-5a61-86f2-6c7286396817", 00:09:06.216 "is_configured": true, 00:09:06.216 "data_offset": 2048, 00:09:06.216 "data_size": 63488 00:09:06.216 } 00:09:06.216 ] 00:09:06.216 }' 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.216 16:50:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.785 16:50:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:06.786 16:50:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:06.786 [2024-11-08 16:50:36.179729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:07.724 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:07.724 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.724 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.724 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.724 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:07.724 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:07.724 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:07.724 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:07.724 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.725 16:50:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.725 "name": "raid_bdev1", 00:09:07.725 "uuid": "106769ca-11c6-4c64-ae38-ff8690f13dd8", 00:09:07.725 "strip_size_kb": 64, 00:09:07.725 "state": "online", 00:09:07.725 "raid_level": "raid0", 00:09:07.725 "superblock": true, 00:09:07.725 "num_base_bdevs": 2, 00:09:07.725 "num_base_bdevs_discovered": 2, 00:09:07.725 "num_base_bdevs_operational": 2, 00:09:07.725 "base_bdevs_list": [ 00:09:07.725 { 00:09:07.725 "name": "BaseBdev1", 00:09:07.725 "uuid": "1e3320b6-e823-52c5-9b19-374bc877cee2", 00:09:07.725 "is_configured": true, 00:09:07.725 "data_offset": 2048, 00:09:07.725 "data_size": 63488 00:09:07.725 }, 00:09:07.725 { 00:09:07.725 "name": "BaseBdev2", 00:09:07.725 "uuid": "b68c585c-1e1f-5a61-86f2-6c7286396817", 00:09:07.725 "is_configured": true, 00:09:07.725 "data_offset": 2048, 00:09:07.725 "data_size": 63488 00:09:07.725 } 00:09:07.725 ] 00:09:07.725 }' 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.725 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.294 [2024-11-08 16:50:37.519425] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:08.294 [2024-11-08 16:50:37.519540] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.294 [2024-11-08 16:50:37.521985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.294 [2024-11-08 16:50:37.522034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.294 [2024-11-08 16:50:37.522070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.294 [2024-11-08 16:50:37.522079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:08.294 { 00:09:08.294 "results": [ 00:09:08.294 { 00:09:08.294 "job": "raid_bdev1", 00:09:08.294 "core_mask": "0x1", 00:09:08.294 "workload": "randrw", 00:09:08.294 "percentage": 50, 00:09:08.294 "status": "finished", 00:09:08.294 "queue_depth": 1, 00:09:08.294 "io_size": 131072, 00:09:08.294 "runtime": 1.340489, 00:09:08.294 "iops": 16836.393286330585, 00:09:08.294 "mibps": 2104.549160791323, 00:09:08.294 "io_failed": 1, 00:09:08.294 "io_timeout": 0, 00:09:08.294 "avg_latency_us": 82.35876119515608, 00:09:08.294 "min_latency_us": 24.817467248908297, 00:09:08.294 "max_latency_us": 1531.0812227074236 00:09:08.294 } 00:09:08.294 ], 00:09:08.294 "core_count": 1 00:09:08.294 } 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72921 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 72921 ']' 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72921 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72921 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72921' 00:09:08.294 killing process with pid 72921 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72921 00:09:08.294 [2024-11-08 16:50:37.570350] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.294 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72921 00:09:08.295 [2024-11-08 16:50:37.586482] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.554 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:08.554 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VdjS8wzPof 00:09:08.554 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:08.554 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:08.554 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:08.554 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.554 16:50:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:09:08.554 16:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:08.554 00:09:08.554 real 0m3.236s 00:09:08.554 user 0m4.074s 00:09:08.554 sys 0m0.530s 00:09:08.554 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.554 ************************************ 00:09:08.554 END TEST raid_write_error_test 00:09:08.554 ************************************ 00:09:08.554 16:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.554 16:50:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:08.554 16:50:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:08.554 16:50:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:08.554 16:50:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.554 16:50:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.554 ************************************ 00:09:08.554 START TEST raid_state_function_test 00:09:08.554 ************************************ 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73048 00:09:08.554 16:50:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73048' 00:09:08.554 Process raid pid: 73048 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73048 00:09:08.554 16:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73048 ']' 00:09:08.555 16:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.555 16:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.555 16:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.555 16:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.555 16:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.555 [2024-11-08 16:50:38.001791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:08.555 [2024-11-08 16:50:38.002017] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.814 [2024-11-08 16:50:38.164703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.814 [2024-11-08 16:50:38.212182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.814 [2024-11-08 16:50:38.255261] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.814 [2024-11-08 16:50:38.255294] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.383 [2024-11-08 16:50:38.837003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.383 [2024-11-08 16:50:38.837151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.383 [2024-11-08 16:50:38.837194] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.383 [2024-11-08 16:50:38.837234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.383 16:50:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.383 "name": "Existed_Raid", 00:09:09.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.383 "strip_size_kb": 64, 00:09:09.383 "state": "configuring", 00:09:09.383 
"raid_level": "concat", 00:09:09.383 "superblock": false, 00:09:09.383 "num_base_bdevs": 2, 00:09:09.383 "num_base_bdevs_discovered": 0, 00:09:09.383 "num_base_bdevs_operational": 2, 00:09:09.383 "base_bdevs_list": [ 00:09:09.383 { 00:09:09.383 "name": "BaseBdev1", 00:09:09.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.383 "is_configured": false, 00:09:09.383 "data_offset": 0, 00:09:09.383 "data_size": 0 00:09:09.383 }, 00:09:09.383 { 00:09:09.383 "name": "BaseBdev2", 00:09:09.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.383 "is_configured": false, 00:09:09.383 "data_offset": 0, 00:09:09.383 "data_size": 0 00:09:09.383 } 00:09:09.383 ] 00:09:09.383 }' 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.383 16:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.952 [2024-11-08 16:50:39.236246] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.952 [2024-11-08 16:50:39.236369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:09.952 [2024-11-08 16:50:39.248242] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.952 [2024-11-08 16:50:39.248351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.952 [2024-11-08 16:50:39.248378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.952 [2024-11-08 16:50:39.248401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.952 [2024-11-08 16:50:39.269107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.952 BaseBdev1 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.952 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.953 [ 00:09:09.953 { 00:09:09.953 "name": "BaseBdev1", 00:09:09.953 "aliases": [ 00:09:09.953 "be4aa365-4068-49ef-b053-6c9f51100419" 00:09:09.953 ], 00:09:09.953 "product_name": "Malloc disk", 00:09:09.953 "block_size": 512, 00:09:09.953 "num_blocks": 65536, 00:09:09.953 "uuid": "be4aa365-4068-49ef-b053-6c9f51100419", 00:09:09.953 "assigned_rate_limits": { 00:09:09.953 "rw_ios_per_sec": 0, 00:09:09.953 "rw_mbytes_per_sec": 0, 00:09:09.953 "r_mbytes_per_sec": 0, 00:09:09.953 "w_mbytes_per_sec": 0 00:09:09.953 }, 00:09:09.953 "claimed": true, 00:09:09.953 "claim_type": "exclusive_write", 00:09:09.953 "zoned": false, 00:09:09.953 "supported_io_types": { 00:09:09.953 "read": true, 00:09:09.953 "write": true, 00:09:09.953 "unmap": true, 00:09:09.953 "flush": true, 00:09:09.953 "reset": true, 00:09:09.953 "nvme_admin": false, 00:09:09.953 "nvme_io": false, 00:09:09.953 "nvme_io_md": false, 00:09:09.953 "write_zeroes": true, 00:09:09.953 "zcopy": true, 00:09:09.953 "get_zone_info": false, 00:09:09.953 "zone_management": false, 00:09:09.953 "zone_append": false, 00:09:09.953 "compare": false, 00:09:09.953 "compare_and_write": false, 00:09:09.953 "abort": true, 00:09:09.953 "seek_hole": false, 00:09:09.953 "seek_data": false, 00:09:09.953 "copy": true, 00:09:09.953 "nvme_iov_md": 
false 00:09:09.953 }, 00:09:09.953 "memory_domains": [ 00:09:09.953 { 00:09:09.953 "dma_device_id": "system", 00:09:09.953 "dma_device_type": 1 00:09:09.953 }, 00:09:09.953 { 00:09:09.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.953 "dma_device_type": 2 00:09:09.953 } 00:09:09.953 ], 00:09:09.953 "driver_specific": {} 00:09:09.953 } 00:09:09.953 ] 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.953 
16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.953 "name": "Existed_Raid", 00:09:09.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.953 "strip_size_kb": 64, 00:09:09.953 "state": "configuring", 00:09:09.953 "raid_level": "concat", 00:09:09.953 "superblock": false, 00:09:09.953 "num_base_bdevs": 2, 00:09:09.953 "num_base_bdevs_discovered": 1, 00:09:09.953 "num_base_bdevs_operational": 2, 00:09:09.953 "base_bdevs_list": [ 00:09:09.953 { 00:09:09.953 "name": "BaseBdev1", 00:09:09.953 "uuid": "be4aa365-4068-49ef-b053-6c9f51100419", 00:09:09.953 "is_configured": true, 00:09:09.953 "data_offset": 0, 00:09:09.953 "data_size": 65536 00:09:09.953 }, 00:09:09.953 { 00:09:09.953 "name": "BaseBdev2", 00:09:09.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.953 "is_configured": false, 00:09:09.953 "data_offset": 0, 00:09:09.953 "data_size": 0 00:09:09.953 } 00:09:09.953 ] 00:09:09.953 }' 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.953 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.215 [2024-11-08 16:50:39.696478] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.215 [2024-11-08 16:50:39.696611] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.215 [2024-11-08 16:50:39.708491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.215 [2024-11-08 16:50:39.710412] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.215 [2024-11-08 16:50:39.710485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.215 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.476 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.476 "name": "Existed_Raid", 00:09:10.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.476 "strip_size_kb": 64, 00:09:10.476 "state": "configuring", 00:09:10.476 "raid_level": "concat", 00:09:10.476 "superblock": false, 00:09:10.476 "num_base_bdevs": 2, 00:09:10.476 "num_base_bdevs_discovered": 1, 00:09:10.476 "num_base_bdevs_operational": 2, 00:09:10.476 "base_bdevs_list": [ 00:09:10.476 { 00:09:10.476 "name": "BaseBdev1", 00:09:10.476 "uuid": "be4aa365-4068-49ef-b053-6c9f51100419", 00:09:10.476 "is_configured": true, 00:09:10.476 "data_offset": 0, 00:09:10.476 "data_size": 65536 00:09:10.476 }, 00:09:10.476 { 00:09:10.476 "name": "BaseBdev2", 00:09:10.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.476 "is_configured": false, 00:09:10.476 "data_offset": 0, 00:09:10.476 "data_size": 0 00:09:10.476 } 
00:09:10.476 ] 00:09:10.476 }' 00:09:10.477 16:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.477 16:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.737 [2024-11-08 16:50:40.180417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.737 [2024-11-08 16:50:40.180561] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:10.737 [2024-11-08 16:50:40.180600] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:10.737 [2024-11-08 16:50:40.181044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:10.737 [2024-11-08 16:50:40.181293] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:10.737 [2024-11-08 16:50:40.181361] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:10.737 [2024-11-08 16:50:40.181731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.737 BaseBdev2 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.737 16:50:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.737 [ 00:09:10.737 { 00:09:10.737 "name": "BaseBdev2", 00:09:10.737 "aliases": [ 00:09:10.737 "d442c8b5-a4d4-48d2-b761-cc4e9058f816" 00:09:10.737 ], 00:09:10.737 "product_name": "Malloc disk", 00:09:10.737 "block_size": 512, 00:09:10.737 "num_blocks": 65536, 00:09:10.737 "uuid": "d442c8b5-a4d4-48d2-b761-cc4e9058f816", 00:09:10.737 "assigned_rate_limits": { 00:09:10.737 "rw_ios_per_sec": 0, 00:09:10.737 "rw_mbytes_per_sec": 0, 00:09:10.737 "r_mbytes_per_sec": 0, 00:09:10.737 "w_mbytes_per_sec": 0 00:09:10.737 }, 00:09:10.737 "claimed": true, 00:09:10.737 "claim_type": "exclusive_write", 00:09:10.737 "zoned": false, 00:09:10.737 "supported_io_types": { 00:09:10.737 "read": true, 00:09:10.737 "write": true, 00:09:10.737 "unmap": true, 00:09:10.737 "flush": true, 00:09:10.737 "reset": true, 00:09:10.737 "nvme_admin": false, 00:09:10.737 "nvme_io": false, 00:09:10.737 "nvme_io_md": 
false, 00:09:10.737 "write_zeroes": true, 00:09:10.737 "zcopy": true, 00:09:10.737 "get_zone_info": false, 00:09:10.737 "zone_management": false, 00:09:10.737 "zone_append": false, 00:09:10.737 "compare": false, 00:09:10.737 "compare_and_write": false, 00:09:10.737 "abort": true, 00:09:10.737 "seek_hole": false, 00:09:10.737 "seek_data": false, 00:09:10.737 "copy": true, 00:09:10.737 "nvme_iov_md": false 00:09:10.737 }, 00:09:10.737 "memory_domains": [ 00:09:10.737 { 00:09:10.737 "dma_device_id": "system", 00:09:10.737 "dma_device_type": 1 00:09:10.737 }, 00:09:10.737 { 00:09:10.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.737 "dma_device_type": 2 00:09:10.737 } 00:09:10.737 ], 00:09:10.737 "driver_specific": {} 00:09:10.737 } 00:09:10.737 ] 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.737 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.997 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.997 "name": "Existed_Raid", 00:09:10.997 "uuid": "04f8ec0a-9771-4c83-8004-bbb4068c940b", 00:09:10.997 "strip_size_kb": 64, 00:09:10.997 "state": "online", 00:09:10.997 "raid_level": "concat", 00:09:10.997 "superblock": false, 00:09:10.997 "num_base_bdevs": 2, 00:09:10.997 "num_base_bdevs_discovered": 2, 00:09:10.997 "num_base_bdevs_operational": 2, 00:09:10.997 "base_bdevs_list": [ 00:09:10.997 { 00:09:10.997 "name": "BaseBdev1", 00:09:10.997 "uuid": "be4aa365-4068-49ef-b053-6c9f51100419", 00:09:10.997 "is_configured": true, 00:09:10.997 "data_offset": 0, 00:09:10.997 "data_size": 65536 00:09:10.997 }, 00:09:10.997 { 00:09:10.997 "name": "BaseBdev2", 00:09:10.997 "uuid": "d442c8b5-a4d4-48d2-b761-cc4e9058f816", 00:09:10.997 "is_configured": true, 00:09:10.997 "data_offset": 0, 00:09:10.997 "data_size": 65536 00:09:10.997 } 00:09:10.997 ] 00:09:10.997 }' 00:09:10.997 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:10.997 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.257 [2024-11-08 16:50:40.612036] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.257 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.257 "name": "Existed_Raid", 00:09:11.257 "aliases": [ 00:09:11.257 "04f8ec0a-9771-4c83-8004-bbb4068c940b" 00:09:11.257 ], 00:09:11.257 "product_name": "Raid Volume", 00:09:11.257 "block_size": 512, 00:09:11.257 "num_blocks": 131072, 00:09:11.257 "uuid": "04f8ec0a-9771-4c83-8004-bbb4068c940b", 00:09:11.257 "assigned_rate_limits": { 00:09:11.257 "rw_ios_per_sec": 0, 00:09:11.257 "rw_mbytes_per_sec": 0, 00:09:11.257 "r_mbytes_per_sec": 
0, 00:09:11.257 "w_mbytes_per_sec": 0 00:09:11.257 }, 00:09:11.257 "claimed": false, 00:09:11.257 "zoned": false, 00:09:11.257 "supported_io_types": { 00:09:11.257 "read": true, 00:09:11.257 "write": true, 00:09:11.257 "unmap": true, 00:09:11.257 "flush": true, 00:09:11.257 "reset": true, 00:09:11.257 "nvme_admin": false, 00:09:11.257 "nvme_io": false, 00:09:11.257 "nvme_io_md": false, 00:09:11.257 "write_zeroes": true, 00:09:11.257 "zcopy": false, 00:09:11.257 "get_zone_info": false, 00:09:11.257 "zone_management": false, 00:09:11.257 "zone_append": false, 00:09:11.257 "compare": false, 00:09:11.257 "compare_and_write": false, 00:09:11.257 "abort": false, 00:09:11.257 "seek_hole": false, 00:09:11.257 "seek_data": false, 00:09:11.257 "copy": false, 00:09:11.257 "nvme_iov_md": false 00:09:11.257 }, 00:09:11.257 "memory_domains": [ 00:09:11.257 { 00:09:11.257 "dma_device_id": "system", 00:09:11.257 "dma_device_type": 1 00:09:11.257 }, 00:09:11.257 { 00:09:11.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.257 "dma_device_type": 2 00:09:11.257 }, 00:09:11.257 { 00:09:11.257 "dma_device_id": "system", 00:09:11.257 "dma_device_type": 1 00:09:11.257 }, 00:09:11.257 { 00:09:11.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.257 "dma_device_type": 2 00:09:11.257 } 00:09:11.257 ], 00:09:11.257 "driver_specific": { 00:09:11.258 "raid": { 00:09:11.258 "uuid": "04f8ec0a-9771-4c83-8004-bbb4068c940b", 00:09:11.258 "strip_size_kb": 64, 00:09:11.258 "state": "online", 00:09:11.258 "raid_level": "concat", 00:09:11.258 "superblock": false, 00:09:11.258 "num_base_bdevs": 2, 00:09:11.258 "num_base_bdevs_discovered": 2, 00:09:11.258 "num_base_bdevs_operational": 2, 00:09:11.258 "base_bdevs_list": [ 00:09:11.258 { 00:09:11.258 "name": "BaseBdev1", 00:09:11.258 "uuid": "be4aa365-4068-49ef-b053-6c9f51100419", 00:09:11.258 "is_configured": true, 00:09:11.258 "data_offset": 0, 00:09:11.258 "data_size": 65536 00:09:11.258 }, 00:09:11.258 { 00:09:11.258 "name": "BaseBdev2", 
00:09:11.258 "uuid": "d442c8b5-a4d4-48d2-b761-cc4e9058f816", 00:09:11.258 "is_configured": true, 00:09:11.258 "data_offset": 0, 00:09:11.258 "data_size": 65536 00:09:11.258 } 00:09:11.258 ] 00:09:11.258 } 00:09:11.258 } 00:09:11.258 }' 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.258 BaseBdev2' 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.258 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.518 [2024-11-08 16:50:40.839396] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.518 [2024-11-08 16:50:40.839481] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.518 [2024-11-08 16:50:40.839557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.518 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.519 "name": "Existed_Raid", 00:09:11.519 "uuid": "04f8ec0a-9771-4c83-8004-bbb4068c940b", 00:09:11.519 "strip_size_kb": 64, 00:09:11.519 
"state": "offline", 00:09:11.519 "raid_level": "concat", 00:09:11.519 "superblock": false, 00:09:11.519 "num_base_bdevs": 2, 00:09:11.519 "num_base_bdevs_discovered": 1, 00:09:11.519 "num_base_bdevs_operational": 1, 00:09:11.519 "base_bdevs_list": [ 00:09:11.519 { 00:09:11.519 "name": null, 00:09:11.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.519 "is_configured": false, 00:09:11.519 "data_offset": 0, 00:09:11.519 "data_size": 65536 00:09:11.519 }, 00:09:11.519 { 00:09:11.519 "name": "BaseBdev2", 00:09:11.519 "uuid": "d442c8b5-a4d4-48d2-b761-cc4e9058f816", 00:09:11.519 "is_configured": true, 00:09:11.519 "data_offset": 0, 00:09:11.519 "data_size": 65536 00:09:11.519 } 00:09:11.519 ] 00:09:11.519 }' 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.519 16:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.779 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.779 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.779 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.779 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.779 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.779 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.779 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.039 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.040 [2024-11-08 16:50:41.337701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.040 [2024-11-08 16:50:41.337808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73048 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73048 ']' 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 73048 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73048 00:09:12.040 killing process with pid 73048 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73048' 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73048 00:09:12.040 [2024-11-08 16:50:41.451532] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.040 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73048 00:09:12.040 [2024-11-08 16:50:41.452551] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:12.300 00:09:12.300 real 0m3.789s 00:09:12.300 user 0m5.919s 00:09:12.300 sys 0m0.759s 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.300 ************************************ 00:09:12.300 END TEST raid_state_function_test 00:09:12.300 ************************************ 00:09:12.300 16:50:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:12.300 16:50:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:09:12.300 16:50:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.300 16:50:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.300 ************************************ 00:09:12.300 START TEST raid_state_function_test_sb 00:09:12.300 ************************************ 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73290 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73290' 00:09:12.300 Process raid pid: 73290 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73290 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73290 ']' 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.300 16:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.560 [2024-11-08 16:50:41.853479] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:12.560 [2024-11-08 16:50:41.853709] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.560 [2024-11-08 16:50:42.014984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.560 [2024-11-08 16:50:42.058942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.820 [2024-11-08 16:50:42.101040] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.820 [2024-11-08 16:50:42.101162] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.388 [2024-11-08 16:50:42.694344] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:13.388 [2024-11-08 16:50:42.694481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.388 [2024-11-08 16:50:42.694514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.388 [2024-11-08 16:50:42.694538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.388 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.389 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.389 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:13.389 16:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.389 16:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.389 16:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.389 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.389 "name": "Existed_Raid", 00:09:13.389 "uuid": "f074c986-8cf6-4f52-9696-f6682230d8fd", 00:09:13.389 "strip_size_kb": 64, 00:09:13.389 "state": "configuring", 00:09:13.389 "raid_level": "concat", 00:09:13.389 "superblock": true, 00:09:13.389 "num_base_bdevs": 2, 00:09:13.389 "num_base_bdevs_discovered": 0, 00:09:13.389 "num_base_bdevs_operational": 2, 00:09:13.389 "base_bdevs_list": [ 00:09:13.389 { 00:09:13.389 "name": "BaseBdev1", 00:09:13.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.389 "is_configured": false, 00:09:13.389 "data_offset": 0, 00:09:13.389 "data_size": 0 00:09:13.389 }, 00:09:13.389 { 00:09:13.389 "name": "BaseBdev2", 00:09:13.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.389 "is_configured": false, 00:09:13.389 "data_offset": 0, 00:09:13.389 "data_size": 0 00:09:13.389 } 00:09:13.389 ] 00:09:13.389 }' 00:09:13.389 16:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.389 16:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.648 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.648 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.648 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.648 [2024-11-08 16:50:43.145475] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:13.648 [2024-11-08 16:50:43.145592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:13.648 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.648 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:13.648 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.648 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.648 [2024-11-08 16:50:43.157497] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.648 [2024-11-08 16:50:43.157586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.648 [2024-11-08 16:50:43.157598] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.648 [2024-11-08 16:50:43.157607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.648 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.648 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.648 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.648 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.908 [2024-11-08 16:50:43.178353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.908 BaseBdev1 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.908 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.908 [ 00:09:13.908 { 00:09:13.908 "name": "BaseBdev1", 00:09:13.908 "aliases": [ 00:09:13.908 "6c0a8015-7caf-486f-a88d-92f204b7dd36" 00:09:13.908 ], 00:09:13.908 "product_name": "Malloc disk", 00:09:13.908 "block_size": 512, 00:09:13.908 "num_blocks": 65536, 00:09:13.908 "uuid": "6c0a8015-7caf-486f-a88d-92f204b7dd36", 00:09:13.908 "assigned_rate_limits": { 00:09:13.908 "rw_ios_per_sec": 0, 00:09:13.908 "rw_mbytes_per_sec": 0, 00:09:13.908 "r_mbytes_per_sec": 0, 00:09:13.908 "w_mbytes_per_sec": 0 00:09:13.908 }, 00:09:13.908 "claimed": true, 
00:09:13.908 "claim_type": "exclusive_write", 00:09:13.908 "zoned": false, 00:09:13.908 "supported_io_types": { 00:09:13.908 "read": true, 00:09:13.908 "write": true, 00:09:13.908 "unmap": true, 00:09:13.908 "flush": true, 00:09:13.908 "reset": true, 00:09:13.908 "nvme_admin": false, 00:09:13.908 "nvme_io": false, 00:09:13.908 "nvme_io_md": false, 00:09:13.908 "write_zeroes": true, 00:09:13.908 "zcopy": true, 00:09:13.908 "get_zone_info": false, 00:09:13.908 "zone_management": false, 00:09:13.908 "zone_append": false, 00:09:13.908 "compare": false, 00:09:13.908 "compare_and_write": false, 00:09:13.908 "abort": true, 00:09:13.908 "seek_hole": false, 00:09:13.909 "seek_data": false, 00:09:13.909 "copy": true, 00:09:13.909 "nvme_iov_md": false 00:09:13.909 }, 00:09:13.909 "memory_domains": [ 00:09:13.909 { 00:09:13.909 "dma_device_id": "system", 00:09:13.909 "dma_device_type": 1 00:09:13.909 }, 00:09:13.909 { 00:09:13.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.909 "dma_device_type": 2 00:09:13.909 } 00:09:13.909 ], 00:09:13.909 "driver_specific": {} 00:09:13.909 } 00:09:13.909 ] 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.909 16:50:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.909 "name": "Existed_Raid", 00:09:13.909 "uuid": "0f14b751-5092-4b2a-a830-866cc103a029", 00:09:13.909 "strip_size_kb": 64, 00:09:13.909 "state": "configuring", 00:09:13.909 "raid_level": "concat", 00:09:13.909 "superblock": true, 00:09:13.909 "num_base_bdevs": 2, 00:09:13.909 "num_base_bdevs_discovered": 1, 00:09:13.909 "num_base_bdevs_operational": 2, 00:09:13.909 "base_bdevs_list": [ 00:09:13.909 { 00:09:13.909 "name": "BaseBdev1", 00:09:13.909 "uuid": "6c0a8015-7caf-486f-a88d-92f204b7dd36", 00:09:13.909 "is_configured": true, 00:09:13.909 "data_offset": 2048, 00:09:13.909 "data_size": 63488 00:09:13.909 }, 00:09:13.909 { 00:09:13.909 "name": "BaseBdev2", 00:09:13.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.909 
"is_configured": false, 00:09:13.909 "data_offset": 0, 00:09:13.909 "data_size": 0 00:09:13.909 } 00:09:13.909 ] 00:09:13.909 }' 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.909 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.168 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.168 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.168 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.168 [2024-11-08 16:50:43.665573] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.168 [2024-11-08 16:50:43.665703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:14.168 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.168 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:14.168 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.168 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.168 [2024-11-08 16:50:43.677574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.168 [2024-11-08 16:50:43.679536] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.168 [2024-11-08 16:50:43.679619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.168 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.168 16:50:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:14.168 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.168 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:14.168 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.169 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.428 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.428 16:50:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.428 "name": "Existed_Raid", 00:09:14.428 "uuid": "b1d08864-464b-48ec-843e-7c51f24e6fa3", 00:09:14.428 "strip_size_kb": 64, 00:09:14.428 "state": "configuring", 00:09:14.428 "raid_level": "concat", 00:09:14.428 "superblock": true, 00:09:14.428 "num_base_bdevs": 2, 00:09:14.428 "num_base_bdevs_discovered": 1, 00:09:14.428 "num_base_bdevs_operational": 2, 00:09:14.428 "base_bdevs_list": [ 00:09:14.428 { 00:09:14.428 "name": "BaseBdev1", 00:09:14.428 "uuid": "6c0a8015-7caf-486f-a88d-92f204b7dd36", 00:09:14.428 "is_configured": true, 00:09:14.428 "data_offset": 2048, 00:09:14.428 "data_size": 63488 00:09:14.428 }, 00:09:14.428 { 00:09:14.428 "name": "BaseBdev2", 00:09:14.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.428 "is_configured": false, 00:09:14.428 "data_offset": 0, 00:09:14.428 "data_size": 0 00:09:14.428 } 00:09:14.428 ] 00:09:14.428 }' 00:09:14.428 16:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.428 16:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.688 [2024-11-08 16:50:44.114967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.688 [2024-11-08 16:50:44.115271] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:14.688 [2024-11-08 16:50:44.115336] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:14.688 [2024-11-08 16:50:44.115659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005ba0 00:09:14.688 [2024-11-08 16:50:44.115836] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:14.688 [2024-11-08 16:50:44.115886] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:14.688 BaseBdev2 00:09:14.688 [2024-11-08 16:50:44.116034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.688 16:50:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.688 [ 00:09:14.688 { 00:09:14.688 "name": "BaseBdev2", 00:09:14.688 "aliases": [ 00:09:14.688 "102bb6ec-c1c3-4e3b-900d-0c8dc7083e59" 00:09:14.688 ], 00:09:14.688 "product_name": "Malloc disk", 00:09:14.688 "block_size": 512, 00:09:14.688 "num_blocks": 65536, 00:09:14.688 "uuid": "102bb6ec-c1c3-4e3b-900d-0c8dc7083e59", 00:09:14.688 "assigned_rate_limits": { 00:09:14.688 "rw_ios_per_sec": 0, 00:09:14.688 "rw_mbytes_per_sec": 0, 00:09:14.688 "r_mbytes_per_sec": 0, 00:09:14.688 "w_mbytes_per_sec": 0 00:09:14.688 }, 00:09:14.688 "claimed": true, 00:09:14.688 "claim_type": "exclusive_write", 00:09:14.688 "zoned": false, 00:09:14.688 "supported_io_types": { 00:09:14.688 "read": true, 00:09:14.688 "write": true, 00:09:14.688 "unmap": true, 00:09:14.688 "flush": true, 00:09:14.688 "reset": true, 00:09:14.688 "nvme_admin": false, 00:09:14.688 "nvme_io": false, 00:09:14.688 "nvme_io_md": false, 00:09:14.688 "write_zeroes": true, 00:09:14.688 "zcopy": true, 00:09:14.688 "get_zone_info": false, 00:09:14.688 "zone_management": false, 00:09:14.688 "zone_append": false, 00:09:14.688 "compare": false, 00:09:14.688 "compare_and_write": false, 00:09:14.688 "abort": true, 00:09:14.688 "seek_hole": false, 00:09:14.688 "seek_data": false, 00:09:14.688 "copy": true, 00:09:14.688 "nvme_iov_md": false 00:09:14.688 }, 00:09:14.688 "memory_domains": [ 00:09:14.688 { 00:09:14.688 "dma_device_id": "system", 00:09:14.688 "dma_device_type": 1 00:09:14.688 }, 00:09:14.688 { 00:09:14.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.688 "dma_device_type": 2 00:09:14.688 } 00:09:14.688 ], 00:09:14.688 "driver_specific": {} 00:09:14.688 } 00:09:14.688 ] 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:14.688 16:50:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.688 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.689 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.689 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.689 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.689 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.689 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.689 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.689 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.689 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.689 16:50:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.689 "name": "Existed_Raid", 00:09:14.689 "uuid": "b1d08864-464b-48ec-843e-7c51f24e6fa3", 00:09:14.689 "strip_size_kb": 64, 00:09:14.689 "state": "online", 00:09:14.689 "raid_level": "concat", 00:09:14.689 "superblock": true, 00:09:14.689 "num_base_bdevs": 2, 00:09:14.689 "num_base_bdevs_discovered": 2, 00:09:14.689 "num_base_bdevs_operational": 2, 00:09:14.689 "base_bdevs_list": [ 00:09:14.689 { 00:09:14.689 "name": "BaseBdev1", 00:09:14.689 "uuid": "6c0a8015-7caf-486f-a88d-92f204b7dd36", 00:09:14.689 "is_configured": true, 00:09:14.689 "data_offset": 2048, 00:09:14.689 "data_size": 63488 00:09:14.689 }, 00:09:14.689 { 00:09:14.689 "name": "BaseBdev2", 00:09:14.689 "uuid": "102bb6ec-c1c3-4e3b-900d-0c8dc7083e59", 00:09:14.689 "is_configured": true, 00:09:14.689 "data_offset": 2048, 00:09:14.689 "data_size": 63488 00:09:14.689 } 00:09:14.689 ] 00:09:14.689 }' 00:09:14.689 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.689 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:15.259 16:50:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 [2024-11-08 16:50:44.578503] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:15.259 "name": "Existed_Raid", 00:09:15.259 "aliases": [ 00:09:15.259 "b1d08864-464b-48ec-843e-7c51f24e6fa3" 00:09:15.259 ], 00:09:15.259 "product_name": "Raid Volume", 00:09:15.259 "block_size": 512, 00:09:15.259 "num_blocks": 126976, 00:09:15.259 "uuid": "b1d08864-464b-48ec-843e-7c51f24e6fa3", 00:09:15.259 "assigned_rate_limits": { 00:09:15.259 "rw_ios_per_sec": 0, 00:09:15.259 "rw_mbytes_per_sec": 0, 00:09:15.259 "r_mbytes_per_sec": 0, 00:09:15.259 "w_mbytes_per_sec": 0 00:09:15.259 }, 00:09:15.259 "claimed": false, 00:09:15.259 "zoned": false, 00:09:15.259 "supported_io_types": { 00:09:15.259 "read": true, 00:09:15.259 "write": true, 00:09:15.259 "unmap": true, 00:09:15.259 "flush": true, 00:09:15.259 "reset": true, 00:09:15.259 "nvme_admin": false, 00:09:15.259 "nvme_io": false, 00:09:15.259 "nvme_io_md": false, 00:09:15.259 "write_zeroes": true, 00:09:15.259 "zcopy": false, 00:09:15.259 "get_zone_info": false, 00:09:15.259 "zone_management": false, 00:09:15.259 "zone_append": false, 00:09:15.259 "compare": false, 00:09:15.259 "compare_and_write": false, 00:09:15.259 "abort": false, 00:09:15.259 "seek_hole": false, 00:09:15.259 "seek_data": false, 00:09:15.259 "copy": false, 00:09:15.259 "nvme_iov_md": false 00:09:15.259 }, 00:09:15.259 "memory_domains": [ 00:09:15.259 { 00:09:15.259 "dma_device_id": 
"system", 00:09:15.259 "dma_device_type": 1 00:09:15.259 }, 00:09:15.259 { 00:09:15.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.259 "dma_device_type": 2 00:09:15.259 }, 00:09:15.259 { 00:09:15.259 "dma_device_id": "system", 00:09:15.259 "dma_device_type": 1 00:09:15.259 }, 00:09:15.259 { 00:09:15.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.259 "dma_device_type": 2 00:09:15.259 } 00:09:15.259 ], 00:09:15.259 "driver_specific": { 00:09:15.259 "raid": { 00:09:15.259 "uuid": "b1d08864-464b-48ec-843e-7c51f24e6fa3", 00:09:15.259 "strip_size_kb": 64, 00:09:15.259 "state": "online", 00:09:15.259 "raid_level": "concat", 00:09:15.259 "superblock": true, 00:09:15.259 "num_base_bdevs": 2, 00:09:15.259 "num_base_bdevs_discovered": 2, 00:09:15.259 "num_base_bdevs_operational": 2, 00:09:15.259 "base_bdevs_list": [ 00:09:15.259 { 00:09:15.259 "name": "BaseBdev1", 00:09:15.259 "uuid": "6c0a8015-7caf-486f-a88d-92f204b7dd36", 00:09:15.259 "is_configured": true, 00:09:15.259 "data_offset": 2048, 00:09:15.259 "data_size": 63488 00:09:15.259 }, 00:09:15.259 { 00:09:15.259 "name": "BaseBdev2", 00:09:15.259 "uuid": "102bb6ec-c1c3-4e3b-900d-0c8dc7083e59", 00:09:15.259 "is_configured": true, 00:09:15.259 "data_offset": 2048, 00:09:15.259 "data_size": 63488 00:09:15.259 } 00:09:15.259 ] 00:09:15.259 } 00:09:15.259 } 00:09:15.259 }' 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:15.259 BaseBdev2' 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.259 16:50:44 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.260 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.260 [2024-11-08 16:50:44.773942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.260 [2024-11-08 16:50:44.774021] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.260 [2024-11-08 16:50:44.774097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:15.522 16:50:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.522 "name": "Existed_Raid", 00:09:15.522 "uuid": "b1d08864-464b-48ec-843e-7c51f24e6fa3", 00:09:15.522 "strip_size_kb": 64, 00:09:15.522 "state": "offline", 00:09:15.522 "raid_level": "concat", 00:09:15.522 "superblock": true, 00:09:15.522 "num_base_bdevs": 2, 00:09:15.522 "num_base_bdevs_discovered": 1, 00:09:15.522 "num_base_bdevs_operational": 1, 00:09:15.522 "base_bdevs_list": [ 00:09:15.522 { 00:09:15.522 "name": null, 00:09:15.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.522 "is_configured": false, 00:09:15.522 "data_offset": 0, 00:09:15.522 "data_size": 63488 00:09:15.522 }, 00:09:15.522 { 00:09:15.522 "name": "BaseBdev2", 00:09:15.522 "uuid": "102bb6ec-c1c3-4e3b-900d-0c8dc7083e59", 00:09:15.522 "is_configured": true, 00:09:15.522 "data_offset": 2048, 00:09:15.522 "data_size": 63488 00:09:15.522 } 00:09:15.522 ] 00:09:15.522 }' 00:09:15.522 
16:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.522 16:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.786 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.786 [2024-11-08 16:50:45.248627] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:15.787 [2024-11-08 16:50:45.248741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:15.787 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.787 16:50:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:15.787 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.787 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.787 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.787 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.787 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:15.787 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73290 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73290 ']' 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73290 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73290 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.046 16:50:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73290' 00:09:16.046 killing process with pid 73290 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73290 00:09:16.046 [2024-11-08 16:50:45.357743] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.046 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73290 00:09:16.046 [2024-11-08 16:50:45.358784] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.305 16:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:16.305 00:09:16.305 real 0m3.844s 00:09:16.305 user 0m6.019s 00:09:16.305 sys 0m0.793s 00:09:16.305 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.305 16:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.305 ************************************ 00:09:16.305 END TEST raid_state_function_test_sb 00:09:16.305 ************************************ 00:09:16.305 16:50:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:16.305 16:50:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:16.305 16:50:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.305 16:50:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.305 ************************************ 00:09:16.305 START TEST raid_superblock_test 00:09:16.305 ************************************ 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73530 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73530 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73530 ']' 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.305 16:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.306 16:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.306 16:50:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.306 [2024-11-08 16:50:45.775123] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:16.306 [2024-11-08 16:50:45.775357] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73530 ] 00:09:16.565 [2024-11-08 16:50:45.936282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.565 [2024-11-08 16:50:45.981208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.565 [2024-11-08 16:50:46.023405] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.565 [2024-11-08 16:50:46.023449] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.136 16:50:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.136 malloc1 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.136 [2024-11-08 16:50:46.601392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:17.136 [2024-11-08 16:50:46.601530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.136 [2024-11-08 16:50:46.601580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:17.136 [2024-11-08 16:50:46.601622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.136 
[2024-11-08 16:50:46.603757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.136 [2024-11-08 16:50:46.603836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:17.136 pt1 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.136 malloc2 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.136 16:50:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.136 [2024-11-08 16:50:46.638489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:17.136 [2024-11-08 16:50:46.638598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.136 [2024-11-08 16:50:46.638633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:17.136 [2024-11-08 16:50:46.638686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.136 [2024-11-08 16:50:46.640814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.136 [2024-11-08 16:50:46.640885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:17.136 pt2 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.136 [2024-11-08 16:50:46.650532] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:17.136 [2024-11-08 16:50:46.652382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:17.136 [2024-11-08 16:50:46.652557] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:17.136 [2024-11-08 16:50:46.652607] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:17.136 
[2024-11-08 16:50:46.652883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:17.136 [2024-11-08 16:50:46.653047] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:17.136 [2024-11-08 16:50:46.653089] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:17.136 [2024-11-08 16:50:46.653231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.136 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.137 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.397 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.397 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.397 16:50:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.397 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.397 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.397 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.397 "name": "raid_bdev1", 00:09:17.397 "uuid": "16099626-a3db-4808-899e-b61e3cac76e5", 00:09:17.397 "strip_size_kb": 64, 00:09:17.397 "state": "online", 00:09:17.397 "raid_level": "concat", 00:09:17.397 "superblock": true, 00:09:17.397 "num_base_bdevs": 2, 00:09:17.397 "num_base_bdevs_discovered": 2, 00:09:17.397 "num_base_bdevs_operational": 2, 00:09:17.397 "base_bdevs_list": [ 00:09:17.397 { 00:09:17.397 "name": "pt1", 00:09:17.397 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.397 "is_configured": true, 00:09:17.397 "data_offset": 2048, 00:09:17.397 "data_size": 63488 00:09:17.397 }, 00:09:17.397 { 00:09:17.397 "name": "pt2", 00:09:17.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.397 "is_configured": true, 00:09:17.397 "data_offset": 2048, 00:09:17.397 "data_size": 63488 00:09:17.397 } 00:09:17.397 ] 00:09:17.397 }' 00:09:17.397 16:50:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.397 16:50:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.657 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:17.657 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:17.657 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.657 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:17.657 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.657 
16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.657 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.657 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.657 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.657 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.657 [2024-11-08 16:50:47.078061] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.657 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.657 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.657 "name": "raid_bdev1", 00:09:17.657 "aliases": [ 00:09:17.657 "16099626-a3db-4808-899e-b61e3cac76e5" 00:09:17.657 ], 00:09:17.657 "product_name": "Raid Volume", 00:09:17.657 "block_size": 512, 00:09:17.657 "num_blocks": 126976, 00:09:17.657 "uuid": "16099626-a3db-4808-899e-b61e3cac76e5", 00:09:17.657 "assigned_rate_limits": { 00:09:17.657 "rw_ios_per_sec": 0, 00:09:17.657 "rw_mbytes_per_sec": 0, 00:09:17.657 "r_mbytes_per_sec": 0, 00:09:17.657 "w_mbytes_per_sec": 0 00:09:17.657 }, 00:09:17.657 "claimed": false, 00:09:17.657 "zoned": false, 00:09:17.657 "supported_io_types": { 00:09:17.657 "read": true, 00:09:17.657 "write": true, 00:09:17.657 "unmap": true, 00:09:17.657 "flush": true, 00:09:17.657 "reset": true, 00:09:17.657 "nvme_admin": false, 00:09:17.657 "nvme_io": false, 00:09:17.657 "nvme_io_md": false, 00:09:17.657 "write_zeroes": true, 00:09:17.657 "zcopy": false, 00:09:17.657 "get_zone_info": false, 00:09:17.657 "zone_management": false, 00:09:17.657 "zone_append": false, 00:09:17.657 "compare": false, 00:09:17.657 "compare_and_write": false, 00:09:17.657 "abort": false, 00:09:17.657 "seek_hole": false, 00:09:17.657 
"seek_data": false, 00:09:17.657 "copy": false, 00:09:17.657 "nvme_iov_md": false 00:09:17.657 }, 00:09:17.657 "memory_domains": [ 00:09:17.657 { 00:09:17.657 "dma_device_id": "system", 00:09:17.657 "dma_device_type": 1 00:09:17.657 }, 00:09:17.657 { 00:09:17.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.657 "dma_device_type": 2 00:09:17.657 }, 00:09:17.657 { 00:09:17.658 "dma_device_id": "system", 00:09:17.658 "dma_device_type": 1 00:09:17.658 }, 00:09:17.658 { 00:09:17.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.658 "dma_device_type": 2 00:09:17.658 } 00:09:17.658 ], 00:09:17.658 "driver_specific": { 00:09:17.658 "raid": { 00:09:17.658 "uuid": "16099626-a3db-4808-899e-b61e3cac76e5", 00:09:17.658 "strip_size_kb": 64, 00:09:17.658 "state": "online", 00:09:17.658 "raid_level": "concat", 00:09:17.658 "superblock": true, 00:09:17.658 "num_base_bdevs": 2, 00:09:17.658 "num_base_bdevs_discovered": 2, 00:09:17.658 "num_base_bdevs_operational": 2, 00:09:17.658 "base_bdevs_list": [ 00:09:17.658 { 00:09:17.658 "name": "pt1", 00:09:17.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.658 "is_configured": true, 00:09:17.658 "data_offset": 2048, 00:09:17.658 "data_size": 63488 00:09:17.658 }, 00:09:17.658 { 00:09:17.658 "name": "pt2", 00:09:17.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.658 "is_configured": true, 00:09:17.658 "data_offset": 2048, 00:09:17.658 "data_size": 63488 00:09:17.658 } 00:09:17.658 ] 00:09:17.658 } 00:09:17.658 } 00:09:17.658 }' 00:09:17.658 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.658 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:17.658 pt2' 00:09:17.658 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.918 16:50:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.918 [2024-11-08 16:50:47.297615] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=16099626-a3db-4808-899e-b61e3cac76e5 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 16099626-a3db-4808-899e-b61e3cac76e5 ']' 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.918 [2024-11-08 16:50:47.329335] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.918 [2024-11-08 16:50:47.329401] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.918 [2024-11-08 16:50:47.329485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.918 [2024-11-08 16:50:47.329566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.918 [2024-11-08 16:50:47.329621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.918 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.178 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:18.178 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:18.178 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:18.178 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.179 [2024-11-08 16:50:47.465150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:18.179 [2024-11-08 16:50:47.467096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:18.179 [2024-11-08 16:50:47.467215] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:18.179 [2024-11-08 16:50:47.467301] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:18.179 [2024-11-08 16:50:47.467350] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.179 [2024-11-08 16:50:47.467403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:18.179 request: 00:09:18.179 { 00:09:18.179 "name": "raid_bdev1", 00:09:18.179 "raid_level": "concat", 00:09:18.179 "base_bdevs": [ 00:09:18.179 "malloc1", 00:09:18.179 "malloc2" 00:09:18.179 ], 00:09:18.179 "strip_size_kb": 64, 00:09:18.179 "superblock": false, 00:09:18.179 "method": "bdev_raid_create", 00:09:18.179 "req_id": 1 00:09:18.179 } 00:09:18.179 Got JSON-RPC error response 00:09:18.179 response: 00:09:18.179 { 00:09:18.179 "code": -17, 00:09:18.179 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:18.179 } 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.179 [2024-11-08 16:50:47.529003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:18.179 [2024-11-08 16:50:47.529048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.179 [2024-11-08 16:50:47.529066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:18.179 [2024-11-08 16:50:47.529076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.179 [2024-11-08 16:50:47.531106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.179 [2024-11-08 16:50:47.531141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:18.179 [2024-11-08 16:50:47.531208] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:18.179 [2024-11-08 16:50:47.531245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:18.179 pt1 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.179 "name": "raid_bdev1", 00:09:18.179 "uuid": "16099626-a3db-4808-899e-b61e3cac76e5", 00:09:18.179 "strip_size_kb": 64, 00:09:18.179 "state": "configuring", 00:09:18.179 "raid_level": "concat", 00:09:18.179 "superblock": true, 00:09:18.179 "num_base_bdevs": 2, 00:09:18.179 "num_base_bdevs_discovered": 1, 00:09:18.179 "num_base_bdevs_operational": 2, 00:09:18.179 "base_bdevs_list": [ 00:09:18.179 { 00:09:18.179 
"name": "pt1", 00:09:18.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.179 "is_configured": true, 00:09:18.179 "data_offset": 2048, 00:09:18.179 "data_size": 63488 00:09:18.179 }, 00:09:18.179 { 00:09:18.179 "name": null, 00:09:18.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.179 "is_configured": false, 00:09:18.179 "data_offset": 2048, 00:09:18.179 "data_size": 63488 00:09:18.179 } 00:09:18.179 ] 00:09:18.179 }' 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.179 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.439 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:18.439 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:18.439 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.439 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.439 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.439 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.439 [2024-11-08 16:50:47.964361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.439 [2024-11-08 16:50:47.964481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.439 [2024-11-08 16:50:47.964531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:18.439 [2024-11-08 16:50:47.964566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.439 [2024-11-08 16:50:47.965056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.439 [2024-11-08 16:50:47.965121] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.439 [2024-11-08 16:50:47.965236] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:18.439 [2024-11-08 16:50:47.965292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.439 [2024-11-08 16:50:47.965438] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:18.439 [2024-11-08 16:50:47.965484] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:18.439 [2024-11-08 16:50:47.965773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:18.439 [2024-11-08 16:50:47.965935] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:18.439 [2024-11-08 16:50:47.965987] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:18.439 [2024-11-08 16:50:47.966138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.699 pt2 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.699 
16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.699 16:50:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.699 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.699 "name": "raid_bdev1", 00:09:18.699 "uuid": "16099626-a3db-4808-899e-b61e3cac76e5", 00:09:18.699 "strip_size_kb": 64, 00:09:18.699 "state": "online", 00:09:18.699 "raid_level": "concat", 00:09:18.699 "superblock": true, 00:09:18.699 "num_base_bdevs": 2, 00:09:18.699 "num_base_bdevs_discovered": 2, 00:09:18.699 "num_base_bdevs_operational": 2, 00:09:18.699 "base_bdevs_list": [ 00:09:18.699 { 00:09:18.699 "name": "pt1", 00:09:18.699 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.699 "is_configured": true, 00:09:18.699 "data_offset": 2048, 00:09:18.699 "data_size": 63488 00:09:18.699 }, 00:09:18.699 { 00:09:18.699 "name": "pt2", 00:09:18.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.699 "is_configured": true, 00:09:18.699 "data_offset": 2048, 00:09:18.699 "data_size": 63488 
00:09:18.699 } 00:09:18.699 ] 00:09:18.699 }' 00:09:18.699 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.699 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.959 [2024-11-08 16:50:48.387933] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.959 "name": "raid_bdev1", 00:09:18.959 "aliases": [ 00:09:18.959 "16099626-a3db-4808-899e-b61e3cac76e5" 00:09:18.959 ], 00:09:18.959 "product_name": "Raid Volume", 00:09:18.959 "block_size": 512, 00:09:18.959 "num_blocks": 126976, 00:09:18.959 "uuid": "16099626-a3db-4808-899e-b61e3cac76e5", 00:09:18.959 "assigned_rate_limits": { 00:09:18.959 
"rw_ios_per_sec": 0, 00:09:18.959 "rw_mbytes_per_sec": 0, 00:09:18.959 "r_mbytes_per_sec": 0, 00:09:18.959 "w_mbytes_per_sec": 0 00:09:18.959 }, 00:09:18.959 "claimed": false, 00:09:18.959 "zoned": false, 00:09:18.959 "supported_io_types": { 00:09:18.959 "read": true, 00:09:18.959 "write": true, 00:09:18.959 "unmap": true, 00:09:18.959 "flush": true, 00:09:18.959 "reset": true, 00:09:18.959 "nvme_admin": false, 00:09:18.959 "nvme_io": false, 00:09:18.959 "nvme_io_md": false, 00:09:18.959 "write_zeroes": true, 00:09:18.959 "zcopy": false, 00:09:18.959 "get_zone_info": false, 00:09:18.959 "zone_management": false, 00:09:18.959 "zone_append": false, 00:09:18.959 "compare": false, 00:09:18.959 "compare_and_write": false, 00:09:18.959 "abort": false, 00:09:18.959 "seek_hole": false, 00:09:18.959 "seek_data": false, 00:09:18.959 "copy": false, 00:09:18.959 "nvme_iov_md": false 00:09:18.959 }, 00:09:18.959 "memory_domains": [ 00:09:18.959 { 00:09:18.959 "dma_device_id": "system", 00:09:18.959 "dma_device_type": 1 00:09:18.959 }, 00:09:18.959 { 00:09:18.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.959 "dma_device_type": 2 00:09:18.959 }, 00:09:18.959 { 00:09:18.959 "dma_device_id": "system", 00:09:18.959 "dma_device_type": 1 00:09:18.959 }, 00:09:18.959 { 00:09:18.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.959 "dma_device_type": 2 00:09:18.959 } 00:09:18.959 ], 00:09:18.959 "driver_specific": { 00:09:18.959 "raid": { 00:09:18.959 "uuid": "16099626-a3db-4808-899e-b61e3cac76e5", 00:09:18.959 "strip_size_kb": 64, 00:09:18.959 "state": "online", 00:09:18.959 "raid_level": "concat", 00:09:18.959 "superblock": true, 00:09:18.959 "num_base_bdevs": 2, 00:09:18.959 "num_base_bdevs_discovered": 2, 00:09:18.959 "num_base_bdevs_operational": 2, 00:09:18.959 "base_bdevs_list": [ 00:09:18.959 { 00:09:18.959 "name": "pt1", 00:09:18.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.959 "is_configured": true, 00:09:18.959 "data_offset": 2048, 00:09:18.959 
"data_size": 63488 00:09:18.959 }, 00:09:18.959 { 00:09:18.959 "name": "pt2", 00:09:18.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.959 "is_configured": true, 00:09:18.959 "data_offset": 2048, 00:09:18.959 "data_size": 63488 00:09:18.959 } 00:09:18.959 ] 00:09:18.959 } 00:09:18.959 } 00:09:18.959 }' 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:18.959 pt2' 00:09:18.959 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.219 [2024-11-08 16:50:48.639443] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 16099626-a3db-4808-899e-b61e3cac76e5 '!=' 16099626-a3db-4808-899e-b61e3cac76e5 ']' 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73530 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73530 ']' 
00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73530 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73530 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:19.219 killing process with pid 73530 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73530' 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73530 00:09:19.219 [2024-11-08 16:50:48.722999] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.219 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73530 00:09:19.219 [2024-11-08 16:50:48.723076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.219 [2024-11-08 16:50:48.723140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.219 [2024-11-08 16:50:48.723157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:19.497 [2024-11-08 16:50:48.745918] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.497 16:50:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:19.497 00:09:19.497 real 0m3.313s 00:09:19.497 user 0m5.137s 00:09:19.497 sys 0m0.676s 00:09:19.497 ************************************ 00:09:19.497 END TEST raid_superblock_test 00:09:19.497 
************************************ 00:09:19.497 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.497 16:50:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.757 16:50:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:19.758 16:50:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:19.758 16:50:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.758 16:50:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.758 ************************************ 00:09:19.758 START TEST raid_read_error_test 00:09:19.758 ************************************ 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dhxEF9QsZD 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73726 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73726 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73726 ']' 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.758 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.758 16:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.758 [2024-11-08 16:50:49.151074] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:19.758 [2024-11-08 16:50:49.151308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73726 ] 00:09:20.018 [2024-11-08 16:50:49.312289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.018 [2024-11-08 16:50:49.356276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.018 [2024-11-08 16:50:49.398151] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.018 [2024-11-08 16:50:49.398194] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.591 16:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.591 16:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:20.591 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.591 16:50:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:20.591 16:50:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.591 16:50:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 BaseBdev1_malloc 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 true 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 [2024-11-08 16:50:50.012762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:20.591 [2024-11-08 16:50:50.012885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.591 [2024-11-08 16:50:50.012931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:20.591 [2024-11-08 16:50:50.012975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.591 [2024-11-08 16:50:50.015330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.591 [2024-11-08 16:50:50.015411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:20.591 BaseBdev1 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 
00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 BaseBdev2_malloc 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 true 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 [2024-11-08 16:50:50.061112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:20.591 [2024-11-08 16:50:50.061172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.591 [2024-11-08 16:50:50.061194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:20.591 [2024-11-08 16:50:50.061204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.591 [2024-11-08 16:50:50.063499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.591 [2024-11-08 16:50:50.063541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:09:20.591 BaseBdev2 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 [2024-11-08 16:50:50.073119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.591 [2024-11-08 16:50:50.075047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.591 [2024-11-08 16:50:50.075264] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:20.591 [2024-11-08 16:50:50.075315] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:20.591 [2024-11-08 16:50:50.075629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:20.591 [2024-11-08 16:50:50.075834] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:20.591 [2024-11-08 16:50:50.075887] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:20.591 [2024-11-08 16:50:50.076088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.591 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.851 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.851 "name": "raid_bdev1", 00:09:20.851 "uuid": "2a29d871-33dd-4d35-a5b7-7ed6d566603e", 00:09:20.851 "strip_size_kb": 64, 00:09:20.851 "state": "online", 00:09:20.851 "raid_level": "concat", 00:09:20.851 "superblock": true, 00:09:20.851 "num_base_bdevs": 2, 00:09:20.851 "num_base_bdevs_discovered": 2, 00:09:20.851 "num_base_bdevs_operational": 2, 00:09:20.851 "base_bdevs_list": [ 00:09:20.851 { 00:09:20.851 "name": "BaseBdev1", 00:09:20.851 "uuid": "345b1666-8ecf-5b3f-aa4d-df2d062a11a6", 00:09:20.851 "is_configured": true, 00:09:20.851 "data_offset": 2048, 00:09:20.851 
"data_size": 63488 00:09:20.851 }, 00:09:20.851 { 00:09:20.851 "name": "BaseBdev2", 00:09:20.851 "uuid": "ea1188a4-7154-5483-9fcc-8309213219c1", 00:09:20.851 "is_configured": true, 00:09:20.851 "data_offset": 2048, 00:09:20.851 "data_size": 63488 00:09:20.851 } 00:09:20.851 ] 00:09:20.851 }' 00:09:20.851 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.851 16:50:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.111 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:21.111 16:50:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:21.371 [2024-11-08 16:50:50.652473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.308 "name": "raid_bdev1", 00:09:22.308 "uuid": "2a29d871-33dd-4d35-a5b7-7ed6d566603e", 00:09:22.308 "strip_size_kb": 64, 00:09:22.308 "state": "online", 00:09:22.308 "raid_level": "concat", 00:09:22.308 "superblock": true, 00:09:22.308 "num_base_bdevs": 2, 00:09:22.308 "num_base_bdevs_discovered": 2, 00:09:22.308 "num_base_bdevs_operational": 2, 00:09:22.308 "base_bdevs_list": [ 00:09:22.308 { 00:09:22.308 "name": "BaseBdev1", 00:09:22.308 "uuid": "345b1666-8ecf-5b3f-aa4d-df2d062a11a6", 00:09:22.308 "is_configured": true, 00:09:22.308 "data_offset": 2048, 00:09:22.308 
"data_size": 63488 00:09:22.308 }, 00:09:22.308 { 00:09:22.308 "name": "BaseBdev2", 00:09:22.308 "uuid": "ea1188a4-7154-5483-9fcc-8309213219c1", 00:09:22.308 "is_configured": true, 00:09:22.308 "data_offset": 2048, 00:09:22.308 "data_size": 63488 00:09:22.308 } 00:09:22.308 ] 00:09:22.308 }' 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.308 16:50:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.569 [2024-11-08 16:50:52.028256] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.569 [2024-11-08 16:50:52.028339] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.569 [2024-11-08 16:50:52.030897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.569 [2024-11-08 16:50:52.030981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.569 [2024-11-08 16:50:52.031035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.569 [2024-11-08 16:50:52.031076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:22.569 { 00:09:22.569 "results": [ 00:09:22.569 { 00:09:22.569 "job": "raid_bdev1", 00:09:22.569 "core_mask": "0x1", 00:09:22.569 "workload": "randrw", 00:09:22.569 "percentage": 50, 00:09:22.569 "status": "finished", 00:09:22.569 "queue_depth": 1, 00:09:22.569 "io_size": 131072, 00:09:22.569 "runtime": 1.376689, 00:09:22.569 "iops": 17377.926314512573, 00:09:22.569 "mibps": 2172.2407893140717, 
00:09:22.569 "io_failed": 1, 00:09:22.569 "io_timeout": 0, 00:09:22.569 "avg_latency_us": 79.50195394085411, 00:09:22.569 "min_latency_us": 24.929257641921396, 00:09:22.569 "max_latency_us": 1445.2262008733624 00:09:22.569 } 00:09:22.569 ], 00:09:22.569 "core_count": 1 00:09:22.569 } 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73726 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73726 ']' 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73726 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73726 00:09:22.569 killing process with pid 73726 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73726' 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73726 00:09:22.569 [2024-11-08 16:50:52.094850] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.569 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73726 00:09:22.828 [2024-11-08 16:50:52.110668] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.828 16:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dhxEF9QsZD 00:09:22.828 16:50:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:22.828 16:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:23.089 16:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:23.089 16:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:23.089 16:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:23.089 16:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:23.089 16:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:23.089 00:09:23.089 real 0m3.306s 00:09:23.089 user 0m4.204s 00:09:23.089 sys 0m0.527s 00:09:23.089 ************************************ 00:09:23.089 END TEST raid_read_error_test 00:09:23.089 ************************************ 00:09:23.089 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.089 16:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.089 16:50:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:23.089 16:50:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:23.089 16:50:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.089 16:50:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.089 ************************************ 00:09:23.089 START TEST raid_write_error_test 00:09:23.089 ************************************ 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:23.089 16:50:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:23.089 16:50:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tcKP5MmcUP 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73855 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73855 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73855 ']' 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.089 16:50:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.089 [2024-11-08 16:50:52.526512] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:23.089 [2024-11-08 16:50:52.526741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73855 ] 00:09:23.349 [2024-11-08 16:50:52.690552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.349 [2024-11-08 16:50:52.735263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.349 [2024-11-08 16:50:52.777180] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.349 [2024-11-08 16:50:52.777221] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.918 BaseBdev1_malloc 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.918 true 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:23.918 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.919 [2024-11-08 16:50:53.374943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:23.919 [2024-11-08 16:50:53.375038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.919 [2024-11-08 16:50:53.375094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:23.919 [2024-11-08 16:50:53.375125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.919 [2024-11-08 16:50:53.377305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.919 [2024-11-08 16:50:53.377379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:23.919 BaseBdev1 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.919 BaseBdev2_malloc 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:23.919 16:50:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.919 true 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.919 [2024-11-08 16:50:53.418294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:23.919 [2024-11-08 16:50:53.418387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.919 [2024-11-08 16:50:53.418439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:23.919 [2024-11-08 16:50:53.418467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.919 [2024-11-08 16:50:53.420571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.919 [2024-11-08 16:50:53.420656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:23.919 BaseBdev2 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.919 [2024-11-08 16:50:53.430325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:23.919 [2024-11-08 16:50:53.432253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.919 [2024-11-08 16:50:53.432464] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:23.919 [2024-11-08 16:50:53.432513] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:23.919 [2024-11-08 16:50:53.432791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:23.919 [2024-11-08 16:50:53.432973] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:23.919 [2024-11-08 16:50:53.433020] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:23.919 [2024-11-08 16:50:53.433181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.919 16:50:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.919 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.179 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.179 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.179 "name": "raid_bdev1", 00:09:24.179 "uuid": "a4d27b36-1211-4c34-b764-1e6d7981a8c8", 00:09:24.179 "strip_size_kb": 64, 00:09:24.179 "state": "online", 00:09:24.179 "raid_level": "concat", 00:09:24.179 "superblock": true, 00:09:24.179 "num_base_bdevs": 2, 00:09:24.179 "num_base_bdevs_discovered": 2, 00:09:24.179 "num_base_bdevs_operational": 2, 00:09:24.179 "base_bdevs_list": [ 00:09:24.179 { 00:09:24.179 "name": "BaseBdev1", 00:09:24.179 "uuid": "6ca10028-7613-5860-b691-95c9f598fa2d", 00:09:24.179 "is_configured": true, 00:09:24.179 "data_offset": 2048, 00:09:24.179 "data_size": 63488 00:09:24.179 }, 00:09:24.179 { 00:09:24.179 "name": "BaseBdev2", 00:09:24.179 "uuid": "22fec2aa-8011-5865-a523-252d7bda7354", 00:09:24.179 "is_configured": true, 00:09:24.179 "data_offset": 2048, 00:09:24.179 "data_size": 63488 00:09:24.179 } 00:09:24.179 ] 00:09:24.179 }' 00:09:24.179 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.179 16:50:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.438 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:24.438 16:50:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:24.697 [2024-11-08 16:50:53.973779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.678 "name": "raid_bdev1", 00:09:25.678 "uuid": "a4d27b36-1211-4c34-b764-1e6d7981a8c8", 00:09:25.678 "strip_size_kb": 64, 00:09:25.678 "state": "online", 00:09:25.678 "raid_level": "concat", 00:09:25.678 "superblock": true, 00:09:25.678 "num_base_bdevs": 2, 00:09:25.678 "num_base_bdevs_discovered": 2, 00:09:25.678 "num_base_bdevs_operational": 2, 00:09:25.678 "base_bdevs_list": [ 00:09:25.678 { 00:09:25.678 "name": "BaseBdev1", 00:09:25.678 "uuid": "6ca10028-7613-5860-b691-95c9f598fa2d", 00:09:25.678 "is_configured": true, 00:09:25.678 "data_offset": 2048, 00:09:25.678 "data_size": 63488 00:09:25.678 }, 00:09:25.678 { 00:09:25.678 "name": "BaseBdev2", 00:09:25.678 "uuid": "22fec2aa-8011-5865-a523-252d7bda7354", 00:09:25.678 "is_configured": true, 00:09:25.678 "data_offset": 2048, 00:09:25.678 "data_size": 63488 00:09:25.678 } 00:09:25.678 ] 00:09:25.678 }' 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.678 16:50:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.944 [2024-11-08 16:50:55.369640] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.944 [2024-11-08 16:50:55.369731] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.944 [2024-11-08 16:50:55.372275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.944 [2024-11-08 16:50:55.372373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.944 [2024-11-08 16:50:55.372432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.944 [2024-11-08 16:50:55.372484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73855 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73855 ']' 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73855 00:09:25.944 { 00:09:25.944 "results": [ 00:09:25.944 { 00:09:25.944 "job": "raid_bdev1", 00:09:25.944 "core_mask": "0x1", 00:09:25.944 "workload": "randrw", 00:09:25.944 "percentage": 50, 00:09:25.944 "status": "finished", 00:09:25.944 "queue_depth": 1, 00:09:25.944 "io_size": 131072, 00:09:25.944 "runtime": 1.396816, 00:09:25.944 "iops": 16763.84004765123, 00:09:25.944 "mibps": 2095.4800059564036, 00:09:25.944 "io_failed": 1, 00:09:25.944 "io_timeout": 0, 00:09:25.944 "avg_latency_us": 82.4895185131244, 00:09:25.944 
"min_latency_us": 26.494323144104804, 00:09:25.944 "max_latency_us": 1416.6078602620087 00:09:25.944 } 00:09:25.944 ], 00:09:25.944 "core_count": 1 00:09:25.944 } 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73855 00:09:25.944 killing process with pid 73855 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73855' 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73855 00:09:25.944 [2024-11-08 16:50:55.415446] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.944 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73855 00:09:25.944 [2024-11-08 16:50:55.430980] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.204 16:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:26.204 16:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tcKP5MmcUP 00:09:26.204 16:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:26.204 16:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:26.204 16:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:26.204 16:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.204 16:50:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:09:26.204 16:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:26.204 00:09:26.204 real 0m3.248s 00:09:26.204 user 0m4.151s 00:09:26.204 sys 0m0.508s 00:09:26.204 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.204 16:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.204 ************************************ 00:09:26.204 END TEST raid_write_error_test 00:09:26.204 ************************************ 00:09:26.464 16:50:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:26.464 16:50:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:26.464 16:50:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:26.464 16:50:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.464 16:50:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.464 ************************************ 00:09:26.464 START TEST raid_state_function_test 00:09:26.464 ************************************ 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:26.464 Process raid pid: 73988 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73988 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73988' 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73988 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73988 ']' 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.464 16:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.464 [2024-11-08 16:50:55.835721] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:26.464 [2024-11-08 16:50:55.835957] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.724 [2024-11-08 16:50:56.000179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.724 [2024-11-08 16:50:56.044528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.724 [2024-11-08 16:50:56.086573] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.724 [2024-11-08 16:50:56.086705] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.294 [2024-11-08 16:50:56.671826] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.294 [2024-11-08 16:50:56.671937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.294 [2024-11-08 16:50:56.671972] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.294 [2024-11-08 16:50:56.671997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.294 16:50:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.294 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.294 "name": "Existed_Raid", 00:09:27.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.295 "strip_size_kb": 0, 00:09:27.295 "state": "configuring", 00:09:27.295 
"raid_level": "raid1", 00:09:27.295 "superblock": false, 00:09:27.295 "num_base_bdevs": 2, 00:09:27.295 "num_base_bdevs_discovered": 0, 00:09:27.295 "num_base_bdevs_operational": 2, 00:09:27.295 "base_bdevs_list": [ 00:09:27.295 { 00:09:27.295 "name": "BaseBdev1", 00:09:27.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.295 "is_configured": false, 00:09:27.295 "data_offset": 0, 00:09:27.295 "data_size": 0 00:09:27.295 }, 00:09:27.295 { 00:09:27.295 "name": "BaseBdev2", 00:09:27.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.295 "is_configured": false, 00:09:27.295 "data_offset": 0, 00:09:27.295 "data_size": 0 00:09:27.295 } 00:09:27.295 ] 00:09:27.295 }' 00:09:27.295 16:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.295 16:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.864 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.864 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.864 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.865 [2024-11-08 16:50:57.107037] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.865 [2024-11-08 16:50:57.107131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:27.865 [2024-11-08 16:50:57.119042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.865 [2024-11-08 16:50:57.119084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.865 [2024-11-08 16:50:57.119093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.865 [2024-11-08 16:50:57.119104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.865 [2024-11-08 16:50:57.139907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.865 BaseBdev1 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.865 [ 00:09:27.865 { 00:09:27.865 "name": "BaseBdev1", 00:09:27.865 "aliases": [ 00:09:27.865 "4dd709a6-b4c6-436f-a087-04b784565097" 00:09:27.865 ], 00:09:27.865 "product_name": "Malloc disk", 00:09:27.865 "block_size": 512, 00:09:27.865 "num_blocks": 65536, 00:09:27.865 "uuid": "4dd709a6-b4c6-436f-a087-04b784565097", 00:09:27.865 "assigned_rate_limits": { 00:09:27.865 "rw_ios_per_sec": 0, 00:09:27.865 "rw_mbytes_per_sec": 0, 00:09:27.865 "r_mbytes_per_sec": 0, 00:09:27.865 "w_mbytes_per_sec": 0 00:09:27.865 }, 00:09:27.865 "claimed": true, 00:09:27.865 "claim_type": "exclusive_write", 00:09:27.865 "zoned": false, 00:09:27.865 "supported_io_types": { 00:09:27.865 "read": true, 00:09:27.865 "write": true, 00:09:27.865 "unmap": true, 00:09:27.865 "flush": true, 00:09:27.865 "reset": true, 00:09:27.865 "nvme_admin": false, 00:09:27.865 "nvme_io": false, 00:09:27.865 "nvme_io_md": false, 00:09:27.865 "write_zeroes": true, 00:09:27.865 "zcopy": true, 00:09:27.865 "get_zone_info": false, 00:09:27.865 "zone_management": false, 00:09:27.865 "zone_append": false, 00:09:27.865 "compare": false, 00:09:27.865 "compare_and_write": false, 00:09:27.865 "abort": true, 00:09:27.865 "seek_hole": false, 00:09:27.865 "seek_data": false, 00:09:27.865 "copy": true, 00:09:27.865 "nvme_iov_md": 
false 00:09:27.865 }, 00:09:27.865 "memory_domains": [ 00:09:27.865 { 00:09:27.865 "dma_device_id": "system", 00:09:27.865 "dma_device_type": 1 00:09:27.865 }, 00:09:27.865 { 00:09:27.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.865 "dma_device_type": 2 00:09:27.865 } 00:09:27.865 ], 00:09:27.865 "driver_specific": {} 00:09:27.865 } 00:09:27.865 ] 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.865 
16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.865 "name": "Existed_Raid", 00:09:27.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.865 "strip_size_kb": 0, 00:09:27.865 "state": "configuring", 00:09:27.865 "raid_level": "raid1", 00:09:27.865 "superblock": false, 00:09:27.865 "num_base_bdevs": 2, 00:09:27.865 "num_base_bdevs_discovered": 1, 00:09:27.865 "num_base_bdevs_operational": 2, 00:09:27.865 "base_bdevs_list": [ 00:09:27.865 { 00:09:27.865 "name": "BaseBdev1", 00:09:27.865 "uuid": "4dd709a6-b4c6-436f-a087-04b784565097", 00:09:27.865 "is_configured": true, 00:09:27.865 "data_offset": 0, 00:09:27.865 "data_size": 65536 00:09:27.865 }, 00:09:27.865 { 00:09:27.865 "name": "BaseBdev2", 00:09:27.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.865 "is_configured": false, 00:09:27.865 "data_offset": 0, 00:09:27.865 "data_size": 0 00:09:27.865 } 00:09:27.865 ] 00:09:27.865 }' 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.865 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.125 [2024-11-08 16:50:57.587253] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.125 [2024-11-08 16:50:57.587369] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.125 [2024-11-08 16:50:57.599294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.125 [2024-11-08 16:50:57.601322] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.125 [2024-11-08 16:50:57.601402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.125 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.385 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.385 "name": "Existed_Raid", 00:09:28.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.385 "strip_size_kb": 0, 00:09:28.385 "state": "configuring", 00:09:28.385 "raid_level": "raid1", 00:09:28.385 "superblock": false, 00:09:28.385 "num_base_bdevs": 2, 00:09:28.385 "num_base_bdevs_discovered": 1, 00:09:28.385 "num_base_bdevs_operational": 2, 00:09:28.385 "base_bdevs_list": [ 00:09:28.385 { 00:09:28.385 "name": "BaseBdev1", 00:09:28.385 "uuid": "4dd709a6-b4c6-436f-a087-04b784565097", 00:09:28.385 "is_configured": true, 00:09:28.385 "data_offset": 0, 00:09:28.385 "data_size": 65536 00:09:28.385 }, 00:09:28.385 { 00:09:28.385 "name": "BaseBdev2", 00:09:28.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.385 "is_configured": false, 00:09:28.385 "data_offset": 0, 00:09:28.385 "data_size": 0 00:09:28.385 } 00:09:28.385 ] 
00:09:28.385 }' 00:09:28.385 16:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.385 16:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.645 [2024-11-08 16:50:58.066847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.645 [2024-11-08 16:50:58.066990] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:28.645 [2024-11-08 16:50:58.067023] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:28.645 [2024-11-08 16:50:58.067410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:28.645 [2024-11-08 16:50:58.067629] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:28.645 [2024-11-08 16:50:58.067716] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:28.645 [2024-11-08 16:50:58.068002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.645 BaseBdev2 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:28.645 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.646 [ 00:09:28.646 { 00:09:28.646 "name": "BaseBdev2", 00:09:28.646 "aliases": [ 00:09:28.646 "f31bb04d-79c5-41dc-81f6-e312d6007108" 00:09:28.646 ], 00:09:28.646 "product_name": "Malloc disk", 00:09:28.646 "block_size": 512, 00:09:28.646 "num_blocks": 65536, 00:09:28.646 "uuid": "f31bb04d-79c5-41dc-81f6-e312d6007108", 00:09:28.646 "assigned_rate_limits": { 00:09:28.646 "rw_ios_per_sec": 0, 00:09:28.646 "rw_mbytes_per_sec": 0, 00:09:28.646 "r_mbytes_per_sec": 0, 00:09:28.646 "w_mbytes_per_sec": 0 00:09:28.646 }, 00:09:28.646 "claimed": true, 00:09:28.646 "claim_type": "exclusive_write", 00:09:28.646 "zoned": false, 00:09:28.646 "supported_io_types": { 00:09:28.646 "read": true, 00:09:28.646 "write": true, 00:09:28.646 "unmap": true, 00:09:28.646 "flush": true, 00:09:28.646 "reset": true, 00:09:28.646 "nvme_admin": false, 00:09:28.646 "nvme_io": false, 00:09:28.646 "nvme_io_md": false, 00:09:28.646 "write_zeroes": 
true, 00:09:28.646 "zcopy": true, 00:09:28.646 "get_zone_info": false, 00:09:28.646 "zone_management": false, 00:09:28.646 "zone_append": false, 00:09:28.646 "compare": false, 00:09:28.646 "compare_and_write": false, 00:09:28.646 "abort": true, 00:09:28.646 "seek_hole": false, 00:09:28.646 "seek_data": false, 00:09:28.646 "copy": true, 00:09:28.646 "nvme_iov_md": false 00:09:28.646 }, 00:09:28.646 "memory_domains": [ 00:09:28.646 { 00:09:28.646 "dma_device_id": "system", 00:09:28.646 "dma_device_type": 1 00:09:28.646 }, 00:09:28.646 { 00:09:28.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.646 "dma_device_type": 2 00:09:28.646 } 00:09:28.646 ], 00:09:28.646 "driver_specific": {} 00:09:28.646 } 00:09:28.646 ] 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.646 16:50:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.646 "name": "Existed_Raid", 00:09:28.646 "uuid": "7ce29e45-a070-4905-918d-ecb95cc98f3e", 00:09:28.646 "strip_size_kb": 0, 00:09:28.646 "state": "online", 00:09:28.646 "raid_level": "raid1", 00:09:28.646 "superblock": false, 00:09:28.646 "num_base_bdevs": 2, 00:09:28.646 "num_base_bdevs_discovered": 2, 00:09:28.646 "num_base_bdevs_operational": 2, 00:09:28.646 "base_bdevs_list": [ 00:09:28.646 { 00:09:28.646 "name": "BaseBdev1", 00:09:28.646 "uuid": "4dd709a6-b4c6-436f-a087-04b784565097", 00:09:28.646 "is_configured": true, 00:09:28.646 "data_offset": 0, 00:09:28.646 "data_size": 65536 00:09:28.646 }, 00:09:28.646 { 00:09:28.646 "name": "BaseBdev2", 00:09:28.646 "uuid": "f31bb04d-79c5-41dc-81f6-e312d6007108", 00:09:28.646 "is_configured": true, 00:09:28.646 "data_offset": 0, 00:09:28.646 "data_size": 65536 00:09:28.646 } 00:09:28.646 ] 00:09:28.646 }' 00:09:28.646 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.646 16:50:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.216 [2024-11-08 16:50:58.598292] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.216 "name": "Existed_Raid", 00:09:29.216 "aliases": [ 00:09:29.216 "7ce29e45-a070-4905-918d-ecb95cc98f3e" 00:09:29.216 ], 00:09:29.216 "product_name": "Raid Volume", 00:09:29.216 "block_size": 512, 00:09:29.216 "num_blocks": 65536, 00:09:29.216 "uuid": "7ce29e45-a070-4905-918d-ecb95cc98f3e", 00:09:29.216 "assigned_rate_limits": { 00:09:29.216 "rw_ios_per_sec": 0, 00:09:29.216 "rw_mbytes_per_sec": 0, 00:09:29.216 "r_mbytes_per_sec": 0, 00:09:29.216 
"w_mbytes_per_sec": 0 00:09:29.216 }, 00:09:29.216 "claimed": false, 00:09:29.216 "zoned": false, 00:09:29.216 "supported_io_types": { 00:09:29.216 "read": true, 00:09:29.216 "write": true, 00:09:29.216 "unmap": false, 00:09:29.216 "flush": false, 00:09:29.216 "reset": true, 00:09:29.216 "nvme_admin": false, 00:09:29.216 "nvme_io": false, 00:09:29.216 "nvme_io_md": false, 00:09:29.216 "write_zeroes": true, 00:09:29.216 "zcopy": false, 00:09:29.216 "get_zone_info": false, 00:09:29.216 "zone_management": false, 00:09:29.216 "zone_append": false, 00:09:29.216 "compare": false, 00:09:29.216 "compare_and_write": false, 00:09:29.216 "abort": false, 00:09:29.216 "seek_hole": false, 00:09:29.216 "seek_data": false, 00:09:29.216 "copy": false, 00:09:29.216 "nvme_iov_md": false 00:09:29.216 }, 00:09:29.216 "memory_domains": [ 00:09:29.216 { 00:09:29.216 "dma_device_id": "system", 00:09:29.216 "dma_device_type": 1 00:09:29.216 }, 00:09:29.216 { 00:09:29.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.216 "dma_device_type": 2 00:09:29.216 }, 00:09:29.216 { 00:09:29.216 "dma_device_id": "system", 00:09:29.216 "dma_device_type": 1 00:09:29.216 }, 00:09:29.216 { 00:09:29.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.216 "dma_device_type": 2 00:09:29.216 } 00:09:29.216 ], 00:09:29.216 "driver_specific": { 00:09:29.216 "raid": { 00:09:29.216 "uuid": "7ce29e45-a070-4905-918d-ecb95cc98f3e", 00:09:29.216 "strip_size_kb": 0, 00:09:29.216 "state": "online", 00:09:29.216 "raid_level": "raid1", 00:09:29.216 "superblock": false, 00:09:29.216 "num_base_bdevs": 2, 00:09:29.216 "num_base_bdevs_discovered": 2, 00:09:29.216 "num_base_bdevs_operational": 2, 00:09:29.216 "base_bdevs_list": [ 00:09:29.216 { 00:09:29.216 "name": "BaseBdev1", 00:09:29.216 "uuid": "4dd709a6-b4c6-436f-a087-04b784565097", 00:09:29.216 "is_configured": true, 00:09:29.216 "data_offset": 0, 00:09:29.216 "data_size": 65536 00:09:29.216 }, 00:09:29.216 { 00:09:29.216 "name": "BaseBdev2", 00:09:29.216 "uuid": 
"f31bb04d-79c5-41dc-81f6-e312d6007108", 00:09:29.216 "is_configured": true, 00:09:29.216 "data_offset": 0, 00:09:29.216 "data_size": 65536 00:09:29.216 } 00:09:29.216 ] 00:09:29.216 } 00:09:29.216 } 00:09:29.216 }' 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:29.216 BaseBdev2' 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:29.216 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.217 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:29.477 16:50:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.477 [2024-11-08 16:50:58.841595] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.477 "name": "Existed_Raid", 00:09:29.477 "uuid": "7ce29e45-a070-4905-918d-ecb95cc98f3e", 00:09:29.477 "strip_size_kb": 0, 00:09:29.477 "state": "online", 00:09:29.477 "raid_level": "raid1", 00:09:29.477 "superblock": false, 00:09:29.477 "num_base_bdevs": 2, 00:09:29.477 "num_base_bdevs_discovered": 1, 00:09:29.477 "num_base_bdevs_operational": 1, 00:09:29.477 "base_bdevs_list": [ 00:09:29.477 { 
00:09:29.477 "name": null, 00:09:29.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.477 "is_configured": false, 00:09:29.477 "data_offset": 0, 00:09:29.477 "data_size": 65536 00:09:29.477 }, 00:09:29.477 { 00:09:29.477 "name": "BaseBdev2", 00:09:29.477 "uuid": "f31bb04d-79c5-41dc-81f6-e312d6007108", 00:09:29.477 "is_configured": true, 00:09:29.477 "data_offset": 0, 00:09:29.477 "data_size": 65536 00:09:29.477 } 00:09:29.477 ] 00:09:29.477 }' 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.477 16:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.738 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:29.738 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.738 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.738 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.738 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.738 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:29.998 [2024-11-08 16:50:59.316089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:29.998 [2024-11-08 16:50:59.316190] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.998 [2024-11-08 16:50:59.327862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.998 [2024-11-08 16:50:59.327914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.998 [2024-11-08 16:50:59.327926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73988 00:09:29.998 16:50:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73988 ']' 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73988 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73988 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.998 killing process with pid 73988 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73988' 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73988 00:09:29.998 [2024-11-08 16:50:59.422481] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.998 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73988 00:09:29.998 [2024-11-08 16:50:59.423457] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.259 16:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:30.260 00:09:30.260 real 0m3.926s 00:09:30.260 user 0m6.184s 00:09:30.260 sys 0m0.772s 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.260 ************************************ 00:09:30.260 END TEST raid_state_function_test 00:09:30.260 ************************************ 00:09:30.260 16:50:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:30.260 16:50:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:30.260 16:50:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.260 16:50:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.260 ************************************ 00:09:30.260 START TEST raid_state_function_test_sb 00:09:30.260 ************************************ 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74230 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74230' 00:09:30.260 Process raid pid: 74230 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74230 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74230 ']' 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.260 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.260 16:50:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.520 [2024-11-08 16:50:59.834544] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:30.520 [2024-11-08 16:50:59.834765] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.520 [2024-11-08 16:51:00.021748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.780 [2024-11-08 16:51:00.067885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.780 [2024-11-08 16:51:00.109737] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.780 [2024-11-08 16:51:00.109772] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.350 [2024-11-08 16:51:00.671151] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.350 [2024-11-08 16:51:00.671248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.350 [2024-11-08 16:51:00.671291] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.350 [2024-11-08 16:51:00.671315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.350 "name": "Existed_Raid", 00:09:31.350 "uuid": "42ff003c-1a84-444c-aa60-6ccd694a6d29", 00:09:31.350 "strip_size_kb": 0, 00:09:31.350 "state": "configuring", 00:09:31.350 "raid_level": "raid1", 00:09:31.350 "superblock": true, 00:09:31.350 "num_base_bdevs": 2, 00:09:31.350 "num_base_bdevs_discovered": 0, 00:09:31.350 "num_base_bdevs_operational": 2, 00:09:31.350 "base_bdevs_list": [ 00:09:31.350 { 00:09:31.350 "name": "BaseBdev1", 00:09:31.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.350 "is_configured": false, 00:09:31.350 "data_offset": 0, 00:09:31.350 "data_size": 0 00:09:31.350 }, 00:09:31.350 { 00:09:31.350 "name": "BaseBdev2", 00:09:31.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.350 "is_configured": false, 00:09:31.350 "data_offset": 0, 00:09:31.350 "data_size": 0 00:09:31.350 } 00:09:31.350 ] 00:09:31.350 }' 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.350 16:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.611 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.611 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.611 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.611 [2024-11-08 16:51:01.110251] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:09:31.611 [2024-11-08 16:51:01.110342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:31.611 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.611 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:31.611 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.611 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.611 [2024-11-08 16:51:01.122270] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.611 [2024-11-08 16:51:01.122344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.611 [2024-11-08 16:51:01.122386] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.611 [2024-11-08 16:51:01.122408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.611 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.611 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:31.611 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.611 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.871 [2024-11-08 16:51:01.142940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.871 BaseBdev1 00:09:31.871 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.871 16:51:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.872 [ 00:09:31.872 { 00:09:31.872 "name": "BaseBdev1", 00:09:31.872 "aliases": [ 00:09:31.872 "ea4a518a-f801-4eef-9797-622f2f59d42f" 00:09:31.872 ], 00:09:31.872 "product_name": "Malloc disk", 00:09:31.872 "block_size": 512, 00:09:31.872 "num_blocks": 65536, 00:09:31.872 "uuid": "ea4a518a-f801-4eef-9797-622f2f59d42f", 00:09:31.872 "assigned_rate_limits": { 00:09:31.872 "rw_ios_per_sec": 0, 00:09:31.872 "rw_mbytes_per_sec": 0, 00:09:31.872 "r_mbytes_per_sec": 0, 00:09:31.872 "w_mbytes_per_sec": 0 00:09:31.872 }, 00:09:31.872 "claimed": true, 
00:09:31.872 "claim_type": "exclusive_write", 00:09:31.872 "zoned": false, 00:09:31.872 "supported_io_types": { 00:09:31.872 "read": true, 00:09:31.872 "write": true, 00:09:31.872 "unmap": true, 00:09:31.872 "flush": true, 00:09:31.872 "reset": true, 00:09:31.872 "nvme_admin": false, 00:09:31.872 "nvme_io": false, 00:09:31.872 "nvme_io_md": false, 00:09:31.872 "write_zeroes": true, 00:09:31.872 "zcopy": true, 00:09:31.872 "get_zone_info": false, 00:09:31.872 "zone_management": false, 00:09:31.872 "zone_append": false, 00:09:31.872 "compare": false, 00:09:31.872 "compare_and_write": false, 00:09:31.872 "abort": true, 00:09:31.872 "seek_hole": false, 00:09:31.872 "seek_data": false, 00:09:31.872 "copy": true, 00:09:31.872 "nvme_iov_md": false 00:09:31.872 }, 00:09:31.872 "memory_domains": [ 00:09:31.872 { 00:09:31.872 "dma_device_id": "system", 00:09:31.872 "dma_device_type": 1 00:09:31.872 }, 00:09:31.872 { 00:09:31.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.872 "dma_device_type": 2 00:09:31.872 } 00:09:31.872 ], 00:09:31.872 "driver_specific": {} 00:09:31.872 } 00:09:31.872 ] 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.872 "name": "Existed_Raid", 00:09:31.872 "uuid": "be2e6183-9135-4f2c-9be0-f4868be11a0d", 00:09:31.872 "strip_size_kb": 0, 00:09:31.872 "state": "configuring", 00:09:31.872 "raid_level": "raid1", 00:09:31.872 "superblock": true, 00:09:31.872 "num_base_bdevs": 2, 00:09:31.872 "num_base_bdevs_discovered": 1, 00:09:31.872 "num_base_bdevs_operational": 2, 00:09:31.872 "base_bdevs_list": [ 00:09:31.872 { 00:09:31.872 "name": "BaseBdev1", 00:09:31.872 "uuid": "ea4a518a-f801-4eef-9797-622f2f59d42f", 00:09:31.872 "is_configured": true, 00:09:31.872 "data_offset": 2048, 00:09:31.872 "data_size": 63488 00:09:31.872 }, 00:09:31.872 { 00:09:31.872 "name": "BaseBdev2", 00:09:31.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.872 "is_configured": false, 00:09:31.872 
"data_offset": 0, 00:09:31.872 "data_size": 0 00:09:31.872 } 00:09:31.872 ] 00:09:31.872 }' 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.872 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.132 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.132 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.132 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.132 [2024-11-08 16:51:01.642112] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.132 [2024-11-08 16:51:01.642224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:32.132 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.132 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:32.132 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.132 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.132 [2024-11-08 16:51:01.654183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.132 [2024-11-08 16:51:01.656270] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.132 [2024-11-08 16:51:01.656365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.132 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.392 "name": "Existed_Raid", 00:09:32.392 "uuid": "973e161c-5fbb-45bb-beab-4c33910b4261", 00:09:32.392 "strip_size_kb": 0, 00:09:32.392 "state": "configuring", 00:09:32.392 "raid_level": "raid1", 00:09:32.392 "superblock": true, 00:09:32.392 "num_base_bdevs": 2, 00:09:32.392 "num_base_bdevs_discovered": 1, 00:09:32.392 "num_base_bdevs_operational": 2, 00:09:32.392 "base_bdevs_list": [ 00:09:32.392 { 00:09:32.392 "name": "BaseBdev1", 00:09:32.392 "uuid": "ea4a518a-f801-4eef-9797-622f2f59d42f", 00:09:32.392 "is_configured": true, 00:09:32.392 "data_offset": 2048, 00:09:32.392 "data_size": 63488 00:09:32.392 }, 00:09:32.392 { 00:09:32.392 "name": "BaseBdev2", 00:09:32.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.392 "is_configured": false, 00:09:32.392 "data_offset": 0, 00:09:32.392 "data_size": 0 00:09:32.392 } 00:09:32.392 ] 00:09:32.392 }' 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.392 16:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.653 [2024-11-08 16:51:02.094164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.653 [2024-11-08 16:51:02.094475] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:32.653 [2024-11-08 16:51:02.094551] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:32.653 [2024-11-08 16:51:02.094921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:32.653 
BaseBdev2 00:09:32.653 [2024-11-08 16:51:02.095124] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:32.653 [2024-11-08 16:51:02.095181] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:32.653 [2024-11-08 16:51:02.095316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.653 [ 00:09:32.653 { 00:09:32.653 "name": "BaseBdev2", 00:09:32.653 "aliases": [ 00:09:32.653 "67fb5a58-3e3c-456a-bf32-79b787d2a3e8" 00:09:32.653 ], 00:09:32.653 "product_name": "Malloc disk", 00:09:32.653 "block_size": 512, 00:09:32.653 "num_blocks": 65536, 00:09:32.653 "uuid": "67fb5a58-3e3c-456a-bf32-79b787d2a3e8", 00:09:32.653 "assigned_rate_limits": { 00:09:32.653 "rw_ios_per_sec": 0, 00:09:32.653 "rw_mbytes_per_sec": 0, 00:09:32.653 "r_mbytes_per_sec": 0, 00:09:32.653 "w_mbytes_per_sec": 0 00:09:32.653 }, 00:09:32.653 "claimed": true, 00:09:32.653 "claim_type": "exclusive_write", 00:09:32.653 "zoned": false, 00:09:32.653 "supported_io_types": { 00:09:32.653 "read": true, 00:09:32.653 "write": true, 00:09:32.653 "unmap": true, 00:09:32.653 "flush": true, 00:09:32.653 "reset": true, 00:09:32.653 "nvme_admin": false, 00:09:32.653 "nvme_io": false, 00:09:32.653 "nvme_io_md": false, 00:09:32.653 "write_zeroes": true, 00:09:32.653 "zcopy": true, 00:09:32.653 "get_zone_info": false, 00:09:32.653 "zone_management": false, 00:09:32.653 "zone_append": false, 00:09:32.653 "compare": false, 00:09:32.653 "compare_and_write": false, 00:09:32.653 "abort": true, 00:09:32.653 "seek_hole": false, 00:09:32.653 "seek_data": false, 00:09:32.653 "copy": true, 00:09:32.653 "nvme_iov_md": false 00:09:32.653 }, 00:09:32.653 "memory_domains": [ 00:09:32.653 { 00:09:32.653 "dma_device_id": "system", 00:09:32.653 "dma_device_type": 1 00:09:32.653 }, 00:09:32.653 { 00:09:32.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.653 "dma_device_type": 2 00:09:32.653 } 00:09:32.653 ], 00:09:32.653 "driver_specific": {} 00:09:32.653 } 00:09:32.653 ] 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.653 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.913 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:32.913 "name": "Existed_Raid", 00:09:32.913 "uuid": "973e161c-5fbb-45bb-beab-4c33910b4261", 00:09:32.913 "strip_size_kb": 0, 00:09:32.913 "state": "online", 00:09:32.913 "raid_level": "raid1", 00:09:32.913 "superblock": true, 00:09:32.913 "num_base_bdevs": 2, 00:09:32.913 "num_base_bdevs_discovered": 2, 00:09:32.913 "num_base_bdevs_operational": 2, 00:09:32.913 "base_bdevs_list": [ 00:09:32.913 { 00:09:32.913 "name": "BaseBdev1", 00:09:32.913 "uuid": "ea4a518a-f801-4eef-9797-622f2f59d42f", 00:09:32.913 "is_configured": true, 00:09:32.913 "data_offset": 2048, 00:09:32.913 "data_size": 63488 00:09:32.913 }, 00:09:32.913 { 00:09:32.913 "name": "BaseBdev2", 00:09:32.913 "uuid": "67fb5a58-3e3c-456a-bf32-79b787d2a3e8", 00:09:32.913 "is_configured": true, 00:09:32.913 "data_offset": 2048, 00:09:32.913 "data_size": 63488 00:09:32.913 } 00:09:32.913 ] 00:09:32.913 }' 00:09:32.913 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.913 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.177 16:51:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.177 [2024-11-08 16:51:02.557712] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.177 "name": "Existed_Raid", 00:09:33.177 "aliases": [ 00:09:33.177 "973e161c-5fbb-45bb-beab-4c33910b4261" 00:09:33.177 ], 00:09:33.177 "product_name": "Raid Volume", 00:09:33.177 "block_size": 512, 00:09:33.177 "num_blocks": 63488, 00:09:33.177 "uuid": "973e161c-5fbb-45bb-beab-4c33910b4261", 00:09:33.177 "assigned_rate_limits": { 00:09:33.177 "rw_ios_per_sec": 0, 00:09:33.177 "rw_mbytes_per_sec": 0, 00:09:33.177 "r_mbytes_per_sec": 0, 00:09:33.177 "w_mbytes_per_sec": 0 00:09:33.177 }, 00:09:33.177 "claimed": false, 00:09:33.177 "zoned": false, 00:09:33.177 "supported_io_types": { 00:09:33.177 "read": true, 00:09:33.177 "write": true, 00:09:33.177 "unmap": false, 00:09:33.177 "flush": false, 00:09:33.177 "reset": true, 00:09:33.177 "nvme_admin": false, 00:09:33.177 "nvme_io": false, 00:09:33.177 "nvme_io_md": false, 00:09:33.177 "write_zeroes": true, 00:09:33.177 "zcopy": false, 00:09:33.177 "get_zone_info": false, 00:09:33.177 "zone_management": false, 00:09:33.177 "zone_append": false, 00:09:33.177 "compare": false, 00:09:33.177 "compare_and_write": false, 00:09:33.177 "abort": false, 00:09:33.177 "seek_hole": false, 00:09:33.177 "seek_data": false, 00:09:33.177 "copy": false, 00:09:33.177 "nvme_iov_md": false 00:09:33.177 }, 00:09:33.177 "memory_domains": [ 00:09:33.177 { 00:09:33.177 "dma_device_id": "system", 00:09:33.177 
"dma_device_type": 1 00:09:33.177 }, 00:09:33.177 { 00:09:33.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.177 "dma_device_type": 2 00:09:33.177 }, 00:09:33.177 { 00:09:33.177 "dma_device_id": "system", 00:09:33.177 "dma_device_type": 1 00:09:33.177 }, 00:09:33.177 { 00:09:33.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.177 "dma_device_type": 2 00:09:33.177 } 00:09:33.177 ], 00:09:33.177 "driver_specific": { 00:09:33.177 "raid": { 00:09:33.177 "uuid": "973e161c-5fbb-45bb-beab-4c33910b4261", 00:09:33.177 "strip_size_kb": 0, 00:09:33.177 "state": "online", 00:09:33.177 "raid_level": "raid1", 00:09:33.177 "superblock": true, 00:09:33.177 "num_base_bdevs": 2, 00:09:33.177 "num_base_bdevs_discovered": 2, 00:09:33.177 "num_base_bdevs_operational": 2, 00:09:33.177 "base_bdevs_list": [ 00:09:33.177 { 00:09:33.177 "name": "BaseBdev1", 00:09:33.177 "uuid": "ea4a518a-f801-4eef-9797-622f2f59d42f", 00:09:33.177 "is_configured": true, 00:09:33.177 "data_offset": 2048, 00:09:33.177 "data_size": 63488 00:09:33.177 }, 00:09:33.177 { 00:09:33.177 "name": "BaseBdev2", 00:09:33.177 "uuid": "67fb5a58-3e3c-456a-bf32-79b787d2a3e8", 00:09:33.177 "is_configured": true, 00:09:33.177 "data_offset": 2048, 00:09:33.177 "data_size": 63488 00:09:33.177 } 00:09:33.177 ] 00:09:33.177 } 00:09:33.177 } 00:09:33.177 }' 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:33.177 BaseBdev2' 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.177 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.445 16:51:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.445 [2024-11-08 16:51:02.789058] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.445 "name": "Existed_Raid", 00:09:33.445 "uuid": "973e161c-5fbb-45bb-beab-4c33910b4261", 00:09:33.445 "strip_size_kb": 0, 00:09:33.445 "state": "online", 00:09:33.445 "raid_level": "raid1", 00:09:33.445 "superblock": true, 00:09:33.445 "num_base_bdevs": 2, 00:09:33.445 "num_base_bdevs_discovered": 1, 00:09:33.445 "num_base_bdevs_operational": 1, 00:09:33.445 "base_bdevs_list": [ 00:09:33.445 { 00:09:33.445 "name": null, 00:09:33.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.445 "is_configured": false, 00:09:33.445 "data_offset": 0, 00:09:33.445 "data_size": 63488 00:09:33.445 }, 00:09:33.445 { 00:09:33.445 "name": "BaseBdev2", 00:09:33.445 "uuid": "67fb5a58-3e3c-456a-bf32-79b787d2a3e8", 00:09:33.445 "is_configured": true, 00:09:33.445 "data_offset": 2048, 00:09:33.445 "data_size": 63488 00:09:33.445 } 00:09:33.445 ] 00:09:33.445 }' 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.445 16:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
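The `Existed_Raid` JSON captured above shows the expected degraded state after `bdev_malloc_delete BaseBdev1`: because raid1 has redundancy, the array stays `online` while `num_base_bdevs_discovered` and `num_base_bdevs_operational` both drop to 1. A minimal sketch of pulling those counters out of the RPC output — a hypothetical illustration only, with the JSON from the trace inlined and parsed with `sed` instead of the test's `rpc_cmd bdev_raid_get_bdevs all | jq` pipeline:

```shell
#!/usr/bin/env bash
# Sketch only: the real test queries the running SPDK target over RPC;
# here the (abbreviated) JSON seen in the trace is hard-coded.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}'
# Extract the two counters line by line (sed stands in for jq here).
discovered=$(sed -n 's/.*"num_base_bdevs_discovered": \([0-9]*\).*/\1/p' <<<"$raid_bdev_info")
operational=$(sed -n 's/.*"num_base_bdevs_operational": \([0-9]*\).*/\1/p' <<<"$raid_bdev_info")
# raid1 keeps redundancy, so losing one of two base bdevs leaves it online.
echo "discovered=$discovered operational=$operational"
```

With the trace's values this prints `discovered=1 operational=1`, which is exactly what `verify_raid_bdev_state Existed_Raid online raid1 0 1` asserts.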
00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.015 [2024-11-08 16:51:03.323576] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.015 [2024-11-08 16:51:03.323705] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.015 [2024-11-08 16:51:03.335191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.015 [2024-11-08 16:51:03.335331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.015 [2024-11-08 16:51:03.335348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74230 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74230 ']' 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74230 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74230 00:09:34.015 killing process with pid 74230 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74230' 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74230 00:09:34.015 [2024-11-08 16:51:03.419092] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:34.015 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74230 00:09:34.015 [2024-11-08 16:51:03.420151] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:34.275 16:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:34.275 00:09:34.275 real 0m3.923s 00:09:34.275 user 0m6.191s 00:09:34.275 sys 0m0.735s 00:09:34.275 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.275 16:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.275 ************************************ 00:09:34.275 END TEST raid_state_function_test_sb 00:09:34.275 ************************************ 00:09:34.275 16:51:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:34.275 16:51:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:34.275 16:51:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.275 16:51:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:34.275 ************************************ 00:09:34.275 START TEST raid_superblock_test 00:09:34.275 ************************************ 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
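The `killprocess 74230` sequence above guards teardown: it only signals the pid if the process still exists, reads the command name back with `ps --no-headers -o comm=`, and refuses to signal a `sudo` wrapper directly. A condensed, hypothetical sketch of that guard — `safe_kill` is not a real SPDK helper (the actual logic lives in `common/autotest_common.sh`), just an illustration of the checks visible in the trace:

```shell
# Hypothetical condensed version of the killprocess-style guard; not the
# real helper, just the checks the trace shows, in order.
safe_kill() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1          # pid must still exist
  local name
  name=$(ps --no-headers -o comm= "$pid") || return 1
  [[ $name != sudo ]] || return 1                 # never signal sudo itself
  echo "killing process with pid $pid"
  kill "$pid"
}

sleep 30 &
safe_kill "$!" && echo 'process terminated'
```

The `kill -0` probe is the standard way to test for a live pid without sending a signal; the comm-name check prevents killing an unrelated process that recycled the pid.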
00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74461 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74461 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74461 ']' 00:09:34.275 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.275 16:51:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.535 [2024-11-08 16:51:03.815345] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:34.535 [2024-11-08 16:51:03.815466] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74461 ] 00:09:34.535 [2024-11-08 16:51:03.958792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.535 [2024-11-08 16:51:04.001802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.535 [2024-11-08 16:51:04.043829] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.535 [2024-11-08 16:51:04.043868] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.475 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.475 malloc1 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.476 [2024-11-08 16:51:04.689923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:35.476 [2024-11-08 16:51:04.690049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.476 [2024-11-08 16:51:04.690105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:35.476 [2024-11-08 16:51:04.690142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.476 [2024-11-08 16:51:04.692212] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.476 [2024-11-08 16:51:04.692288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:35.476 pt1 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.476 malloc2 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.476 16:51:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.476 [2024-11-08 16:51:04.728086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:35.476 [2024-11-08 16:51:04.728206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.476 [2024-11-08 16:51:04.728231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:35.476 [2024-11-08 16:51:04.728244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.476 [2024-11-08 16:51:04.730813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.476 [2024-11-08 16:51:04.730857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:35.476 pt2 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.476 [2024-11-08 16:51:04.736105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:35.476 [2024-11-08 16:51:04.737906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.476 [2024-11-08 16:51:04.738044] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:35.476 [2024-11-08 16:51:04.738059] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:35.476 [2024-11-08 
16:51:04.738316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:35.476 [2024-11-08 16:51:04.738431] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:35.476 [2024-11-08 16:51:04.738440] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:35.476 [2024-11-08 16:51:04.738566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.476 16:51:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.476 "name": "raid_bdev1", 00:09:35.476 "uuid": "a858aa77-954d-4f32-b8ac-9ae3d52697a0", 00:09:35.476 "strip_size_kb": 0, 00:09:35.476 "state": "online", 00:09:35.476 "raid_level": "raid1", 00:09:35.476 "superblock": true, 00:09:35.476 "num_base_bdevs": 2, 00:09:35.476 "num_base_bdevs_discovered": 2, 00:09:35.476 "num_base_bdevs_operational": 2, 00:09:35.476 "base_bdevs_list": [ 00:09:35.476 { 00:09:35.476 "name": "pt1", 00:09:35.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.476 "is_configured": true, 00:09:35.476 "data_offset": 2048, 00:09:35.476 "data_size": 63488 00:09:35.476 }, 00:09:35.476 { 00:09:35.476 "name": "pt2", 00:09:35.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.476 "is_configured": true, 00:09:35.476 "data_offset": 2048, 00:09:35.476 "data_size": 63488 00:09:35.476 } 00:09:35.476 ] 00:09:35.476 }' 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.476 16:51:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.735 
16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.735 [2024-11-08 16:51:05.155671] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.735 "name": "raid_bdev1", 00:09:35.735 "aliases": [ 00:09:35.735 "a858aa77-954d-4f32-b8ac-9ae3d52697a0" 00:09:35.735 ], 00:09:35.735 "product_name": "Raid Volume", 00:09:35.735 "block_size": 512, 00:09:35.735 "num_blocks": 63488, 00:09:35.735 "uuid": "a858aa77-954d-4f32-b8ac-9ae3d52697a0", 00:09:35.735 "assigned_rate_limits": { 00:09:35.735 "rw_ios_per_sec": 0, 00:09:35.735 "rw_mbytes_per_sec": 0, 00:09:35.735 "r_mbytes_per_sec": 0, 00:09:35.735 "w_mbytes_per_sec": 0 00:09:35.735 }, 00:09:35.735 "claimed": false, 00:09:35.735 "zoned": false, 00:09:35.735 "supported_io_types": { 00:09:35.735 "read": true, 00:09:35.735 "write": true, 00:09:35.735 "unmap": false, 00:09:35.735 "flush": false, 00:09:35.735 "reset": true, 00:09:35.735 "nvme_admin": false, 00:09:35.735 "nvme_io": false, 00:09:35.735 "nvme_io_md": false, 00:09:35.735 "write_zeroes": true, 00:09:35.735 "zcopy": false, 00:09:35.735 "get_zone_info": false, 00:09:35.735 "zone_management": false, 00:09:35.735 "zone_append": false, 00:09:35.735 "compare": false, 00:09:35.735 "compare_and_write": false, 00:09:35.735 "abort": false, 00:09:35.735 "seek_hole": false, 
00:09:35.735 "seek_data": false, 00:09:35.735 "copy": false, 00:09:35.735 "nvme_iov_md": false 00:09:35.735 }, 00:09:35.735 "memory_domains": [ 00:09:35.735 { 00:09:35.735 "dma_device_id": "system", 00:09:35.735 "dma_device_type": 1 00:09:35.735 }, 00:09:35.735 { 00:09:35.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.735 "dma_device_type": 2 00:09:35.735 }, 00:09:35.735 { 00:09:35.735 "dma_device_id": "system", 00:09:35.735 "dma_device_type": 1 00:09:35.735 }, 00:09:35.735 { 00:09:35.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.735 "dma_device_type": 2 00:09:35.735 } 00:09:35.735 ], 00:09:35.735 "driver_specific": { 00:09:35.735 "raid": { 00:09:35.735 "uuid": "a858aa77-954d-4f32-b8ac-9ae3d52697a0", 00:09:35.735 "strip_size_kb": 0, 00:09:35.735 "state": "online", 00:09:35.735 "raid_level": "raid1", 00:09:35.735 "superblock": true, 00:09:35.735 "num_base_bdevs": 2, 00:09:35.735 "num_base_bdevs_discovered": 2, 00:09:35.735 "num_base_bdevs_operational": 2, 00:09:35.735 "base_bdevs_list": [ 00:09:35.735 { 00:09:35.735 "name": "pt1", 00:09:35.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.735 "is_configured": true, 00:09:35.735 "data_offset": 2048, 00:09:35.735 "data_size": 63488 00:09:35.735 }, 00:09:35.735 { 00:09:35.735 "name": "pt2", 00:09:35.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.735 "is_configured": true, 00:09:35.735 "data_offset": 2048, 00:09:35.735 "data_size": 63488 00:09:35.735 } 00:09:35.735 ] 00:09:35.735 } 00:09:35.735 } 00:09:35.735 }' 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:35.735 pt2' 00:09:35.735 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.995 16:51:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.995 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.995 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:35.995 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.995 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.995 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.995 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.995 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.995 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.995 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.995 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.995 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
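The `cmp_raid_bdev`/`cmp_base_bdev` strings above come from `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`: `join` renders null fields as empty strings, so a 512-byte-block bdev with no metadata flattens to `512` followed by three spaces, which is why the xtrace shows the escaped pattern `\5\1\2\ \ \ `. A minimal sketch of the flatten-and-compare pattern, with the `rpc_cmd ... | jq` step replaced by the values observed in the trace:

```shell
# Sketch: the real test derives both strings over RPC; here the values seen
# in the trace are hard-coded to isolate the comparison itself.
cmp_raid_bdev='512   '   # block_size=512; md_size/md_interleave/dif_type null
cmp_base_bdev='512   '   # the same four fields read back from the base bdev
if [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]]; then
  echo 'base bdev matches raid bdev'
fi
```

Quoting the right-hand side of `[[ == ]]` forces a literal comparison; the test script instead expands an unquoted, backslash-escaped pattern, which is equivalent here since the string contains no glob characters.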
00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.996 [2024-11-08 16:51:05.379205] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a858aa77-954d-4f32-b8ac-9ae3d52697a0 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a858aa77-954d-4f32-b8ac-9ae3d52697a0 ']' 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.996 [2024-11-08 16:51:05.426850] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.996 [2024-11-08 16:51:05.426914] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.996 [2024-11-08 16:51:05.427002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.996 [2024-11-08 16:51:05.427096] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.996 [2024-11-08 16:51:05.427147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq 
-r '.[]' 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:09:35.996 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.256 [2024-11-08 16:51:05.554685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:36.256 [2024-11-08 16:51:05.556527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:36.256 [2024-11-08 16:51:05.556603] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:09:36.256 [2024-11-08 16:51:05.556662] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:36.256 [2024-11-08 16:51:05.556681] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.256 [2024-11-08 16:51:05.556691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:36.256 request: 00:09:36.256 { 00:09:36.256 "name": "raid_bdev1", 00:09:36.256 "raid_level": "raid1", 00:09:36.256 "base_bdevs": [ 00:09:36.256 "malloc1", 00:09:36.256 "malloc2" 00:09:36.256 ], 00:09:36.256 "superblock": false, 00:09:36.256 "method": "bdev_raid_create", 00:09:36.256 "req_id": 1 00:09:36.256 } 00:09:36.256 Got JSON-RPC error response 00:09:36.256 response: 00:09:36.256 { 00:09:36.256 "code": -17, 00:09:36.256 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:36.256 } 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:36.256 16:51:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.257 [2024-11-08 16:51:05.614542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:36.257 [2024-11-08 16:51:05.614599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.257 [2024-11-08 16:51:05.614618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:36.257 [2024-11-08 16:51:05.614626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.257 [2024-11-08 16:51:05.616778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.257 [2024-11-08 16:51:05.616812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:36.257 [2024-11-08 16:51:05.616886] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:36.257 [2024-11-08 16:51:05.616926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:36.257 pt1 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.257 16:51:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.257 "name": "raid_bdev1", 00:09:36.257 "uuid": "a858aa77-954d-4f32-b8ac-9ae3d52697a0", 00:09:36.257 "strip_size_kb": 0, 00:09:36.257 "state": "configuring", 00:09:36.257 "raid_level": "raid1", 00:09:36.257 "superblock": true, 00:09:36.257 "num_base_bdevs": 2, 00:09:36.257 "num_base_bdevs_discovered": 1, 00:09:36.257 "num_base_bdevs_operational": 2, 00:09:36.257 "base_bdevs_list": [ 00:09:36.257 { 00:09:36.257 "name": "pt1", 00:09:36.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.257 
"is_configured": true, 00:09:36.257 "data_offset": 2048, 00:09:36.257 "data_size": 63488 00:09:36.257 }, 00:09:36.257 { 00:09:36.257 "name": null, 00:09:36.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.257 "is_configured": false, 00:09:36.257 "data_offset": 2048, 00:09:36.257 "data_size": 63488 00:09:36.257 } 00:09:36.257 ] 00:09:36.257 }' 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.257 16:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.516 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:36.516 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.776 [2024-11-08 16:51:06.049820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:36.776 [2024-11-08 16:51:06.049953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.776 [2024-11-08 16:51:06.049998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:36.776 [2024-11-08 16:51:06.050028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.776 [2024-11-08 16:51:06.050536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.776 [2024-11-08 16:51:06.050600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:36.776 [2024-11-08 16:51:06.050730] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:36.776 [2024-11-08 16:51:06.050787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:36.776 [2024-11-08 16:51:06.050944] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:36.776 [2024-11-08 16:51:06.050990] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:36.776 [2024-11-08 16:51:06.051275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:36.776 [2024-11-08 16:51:06.051449] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:36.776 [2024-11-08 16:51:06.051503] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:36.776 [2024-11-08 16:51:06.051675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.776 pt2 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.776 
16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.776 "name": "raid_bdev1", 00:09:36.776 "uuid": "a858aa77-954d-4f32-b8ac-9ae3d52697a0", 00:09:36.776 "strip_size_kb": 0, 00:09:36.776 "state": "online", 00:09:36.776 "raid_level": "raid1", 00:09:36.776 "superblock": true, 00:09:36.776 "num_base_bdevs": 2, 00:09:36.776 "num_base_bdevs_discovered": 2, 00:09:36.776 "num_base_bdevs_operational": 2, 00:09:36.776 "base_bdevs_list": [ 00:09:36.776 { 00:09:36.776 "name": "pt1", 00:09:36.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.776 "is_configured": true, 00:09:36.776 "data_offset": 2048, 00:09:36.776 "data_size": 63488 00:09:36.776 }, 00:09:36.776 { 00:09:36.776 "name": "pt2", 00:09:36.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.776 "is_configured": true, 00:09:36.776 "data_offset": 2048, 00:09:36.776 "data_size": 63488 00:09:36.776 } 00:09:36.776 ] 00:09:36.776 }' 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:36.776 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.036 [2024-11-08 16:51:06.497328] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.036 "name": "raid_bdev1", 00:09:37.036 "aliases": [ 00:09:37.036 "a858aa77-954d-4f32-b8ac-9ae3d52697a0" 00:09:37.036 ], 00:09:37.036 "product_name": "Raid Volume", 00:09:37.036 "block_size": 512, 00:09:37.036 "num_blocks": 63488, 00:09:37.036 "uuid": "a858aa77-954d-4f32-b8ac-9ae3d52697a0", 00:09:37.036 "assigned_rate_limits": { 00:09:37.036 "rw_ios_per_sec": 0, 00:09:37.036 "rw_mbytes_per_sec": 0, 00:09:37.036 "r_mbytes_per_sec": 0, 00:09:37.036 "w_mbytes_per_sec": 0 
00:09:37.036 }, 00:09:37.036 "claimed": false, 00:09:37.036 "zoned": false, 00:09:37.036 "supported_io_types": { 00:09:37.036 "read": true, 00:09:37.036 "write": true, 00:09:37.036 "unmap": false, 00:09:37.036 "flush": false, 00:09:37.036 "reset": true, 00:09:37.036 "nvme_admin": false, 00:09:37.036 "nvme_io": false, 00:09:37.036 "nvme_io_md": false, 00:09:37.036 "write_zeroes": true, 00:09:37.036 "zcopy": false, 00:09:37.036 "get_zone_info": false, 00:09:37.036 "zone_management": false, 00:09:37.036 "zone_append": false, 00:09:37.036 "compare": false, 00:09:37.036 "compare_and_write": false, 00:09:37.036 "abort": false, 00:09:37.036 "seek_hole": false, 00:09:37.036 "seek_data": false, 00:09:37.036 "copy": false, 00:09:37.036 "nvme_iov_md": false 00:09:37.036 }, 00:09:37.036 "memory_domains": [ 00:09:37.036 { 00:09:37.036 "dma_device_id": "system", 00:09:37.036 "dma_device_type": 1 00:09:37.036 }, 00:09:37.036 { 00:09:37.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.036 "dma_device_type": 2 00:09:37.036 }, 00:09:37.036 { 00:09:37.036 "dma_device_id": "system", 00:09:37.036 "dma_device_type": 1 00:09:37.036 }, 00:09:37.036 { 00:09:37.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.036 "dma_device_type": 2 00:09:37.036 } 00:09:37.036 ], 00:09:37.036 "driver_specific": { 00:09:37.036 "raid": { 00:09:37.036 "uuid": "a858aa77-954d-4f32-b8ac-9ae3d52697a0", 00:09:37.036 "strip_size_kb": 0, 00:09:37.036 "state": "online", 00:09:37.036 "raid_level": "raid1", 00:09:37.036 "superblock": true, 00:09:37.036 "num_base_bdevs": 2, 00:09:37.036 "num_base_bdevs_discovered": 2, 00:09:37.036 "num_base_bdevs_operational": 2, 00:09:37.036 "base_bdevs_list": [ 00:09:37.036 { 00:09:37.036 "name": "pt1", 00:09:37.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.036 "is_configured": true, 00:09:37.036 "data_offset": 2048, 00:09:37.036 "data_size": 63488 00:09:37.036 }, 00:09:37.036 { 00:09:37.036 "name": "pt2", 00:09:37.036 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:09:37.036 "is_configured": true, 00:09:37.036 "data_offset": 2048, 00:09:37.036 "data_size": 63488 00:09:37.036 } 00:09:37.036 ] 00:09:37.036 } 00:09:37.036 } 00:09:37.036 }' 00:09:37.036 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:37.296 pt2' 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.296 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.297 [2024-11-08 16:51:06.736913] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a858aa77-954d-4f32-b8ac-9ae3d52697a0 '!=' a858aa77-954d-4f32-b8ac-9ae3d52697a0 ']' 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.297 [2024-11-08 16:51:06.760617] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:37.297 "name": "raid_bdev1", 00:09:37.297 "uuid": "a858aa77-954d-4f32-b8ac-9ae3d52697a0", 00:09:37.297 "strip_size_kb": 0, 00:09:37.297 "state": "online", 00:09:37.297 "raid_level": "raid1", 00:09:37.297 "superblock": true, 00:09:37.297 "num_base_bdevs": 2, 00:09:37.297 "num_base_bdevs_discovered": 1, 00:09:37.297 "num_base_bdevs_operational": 1, 00:09:37.297 "base_bdevs_list": [ 00:09:37.297 { 00:09:37.297 "name": null, 00:09:37.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.297 "is_configured": false, 00:09:37.297 "data_offset": 0, 00:09:37.297 "data_size": 63488 00:09:37.297 }, 00:09:37.297 { 00:09:37.297 "name": "pt2", 00:09:37.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.297 "is_configured": true, 00:09:37.297 "data_offset": 2048, 00:09:37.297 "data_size": 63488 00:09:37.297 } 00:09:37.297 ] 00:09:37.297 }' 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.297 16:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.866 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:37.866 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.866 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.866 [2024-11-08 16:51:07.151930] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:37.866 [2024-11-08 16:51:07.151960] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.866 [2024-11-08 16:51:07.152043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.866 [2024-11-08 16:51:07.152091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.866 [2024-11-08 16:51:07.152100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:37.866 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.866 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.866 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:37.866 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.866 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.866 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.866 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:37.866 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.867 [2024-11-08 16:51:07.223824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:37.867 [2024-11-08 16:51:07.223875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.867 [2024-11-08 16:51:07.223893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:37.867 [2024-11-08 16:51:07.223902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.867 [2024-11-08 16:51:07.225988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.867 [2024-11-08 16:51:07.226024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:37.867 [2024-11-08 16:51:07.226099] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:37.867 [2024-11-08 16:51:07.226131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.867 [2024-11-08 16:51:07.226224] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:37.867 [2024-11-08 16:51:07.226236] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:37.867 [2024-11-08 16:51:07.226443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:37.867 [2024-11-08 16:51:07.226554] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:37.867 [2024-11-08 16:51:07.226566] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006d00 00:09:37.867 [2024-11-08 16:51:07.226686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.867 pt2 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:37.867 "name": "raid_bdev1", 00:09:37.867 "uuid": "a858aa77-954d-4f32-b8ac-9ae3d52697a0", 00:09:37.867 "strip_size_kb": 0, 00:09:37.867 "state": "online", 00:09:37.867 "raid_level": "raid1", 00:09:37.867 "superblock": true, 00:09:37.867 "num_base_bdevs": 2, 00:09:37.867 "num_base_bdevs_discovered": 1, 00:09:37.867 "num_base_bdevs_operational": 1, 00:09:37.867 "base_bdevs_list": [ 00:09:37.867 { 00:09:37.867 "name": null, 00:09:37.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.867 "is_configured": false, 00:09:37.867 "data_offset": 2048, 00:09:37.867 "data_size": 63488 00:09:37.867 }, 00:09:37.867 { 00:09:37.867 "name": "pt2", 00:09:37.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.867 "is_configured": true, 00:09:37.867 "data_offset": 2048, 00:09:37.867 "data_size": 63488 00:09:37.867 } 00:09:37.867 ] 00:09:37.867 }' 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.867 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.437 [2024-11-08 16:51:07.699035] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:38.437 [2024-11-08 16:51:07.699121] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.437 [2024-11-08 16:51:07.699246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.437 [2024-11-08 16:51:07.699313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.437 [2024-11-08 16:51:07.699380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.437 [2024-11-08 16:51:07.742902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:38.437 [2024-11-08 16:51:07.743003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.437 [2024-11-08 16:51:07.743044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:38.437 [2024-11-08 16:51:07.743079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.437 [2024-11-08 16:51:07.745330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.437 [2024-11-08 16:51:07.745410] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:38.437 [2024-11-08 16:51:07.745508] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:38.437 [2024-11-08 16:51:07.745580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:38.437 [2024-11-08 16:51:07.745733] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:38.437 [2024-11-08 16:51:07.745795] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:38.437 [2024-11-08 16:51:07.745877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:09:38.437 [2024-11-08 16:51:07.745962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:38.437 [2024-11-08 16:51:07.746064] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:38.437 [2024-11-08 16:51:07.746103] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.437 [2024-11-08 16:51:07.746341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:38.437 [2024-11-08 16:51:07.746491] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:38.437 [2024-11-08 16:51:07.746531] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:38.437 [2024-11-08 16:51:07.746694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.437 pt1 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.437 "name": "raid_bdev1", 00:09:38.437 "uuid": "a858aa77-954d-4f32-b8ac-9ae3d52697a0", 00:09:38.437 "strip_size_kb": 0, 00:09:38.437 "state": "online", 00:09:38.437 "raid_level": "raid1", 00:09:38.437 "superblock": true, 00:09:38.437 "num_base_bdevs": 2, 00:09:38.437 "num_base_bdevs_discovered": 1, 00:09:38.437 "num_base_bdevs_operational": 
1, 00:09:38.437 "base_bdevs_list": [ 00:09:38.437 { 00:09:38.437 "name": null, 00:09:38.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.437 "is_configured": false, 00:09:38.437 "data_offset": 2048, 00:09:38.437 "data_size": 63488 00:09:38.437 }, 00:09:38.437 { 00:09:38.437 "name": "pt2", 00:09:38.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.437 "is_configured": true, 00:09:38.437 "data_offset": 2048, 00:09:38.437 "data_size": 63488 00:09:38.437 } 00:09:38.437 ] 00:09:38.437 }' 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.437 16:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.697 16:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:38.697 16:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:38.697 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.697 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.697 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.697 16:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:38.697 16:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:38.697 16:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:38.697 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.698 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.958 [2024-11-08 16:51:08.226346] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a858aa77-954d-4f32-b8ac-9ae3d52697a0 '!=' a858aa77-954d-4f32-b8ac-9ae3d52697a0 ']' 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74461 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74461 ']' 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74461 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74461 00:09:38.958 killing process with pid 74461 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74461' 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74461 00:09:38.958 [2024-11-08 16:51:08.310165] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.958 [2024-11-08 16:51:08.310284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.958 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74461 00:09:38.958 [2024-11-08 16:51:08.310336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.958 [2024-11-08 16:51:08.310345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state 
offline 00:09:38.958 [2024-11-08 16:51:08.334052] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.218 16:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:39.218 00:09:39.218 real 0m4.850s 00:09:39.218 user 0m7.912s 00:09:39.218 sys 0m0.964s 00:09:39.218 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.218 ************************************ 00:09:39.218 END TEST raid_superblock_test 00:09:39.218 ************************************ 00:09:39.218 16:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.218 16:51:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:39.218 16:51:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:39.218 16:51:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.218 16:51:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.218 ************************************ 00:09:39.218 START TEST raid_read_error_test 00:09:39.218 ************************************ 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZqAqbM58VT 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74779 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74779 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74779 ']' 00:09:39.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.218 16:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.479 [2024-11-08 16:51:08.744910] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:39.479 [2024-11-08 16:51:08.745055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74779 ] 00:09:39.479 [2024-11-08 16:51:08.886022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.479 [2024-11-08 16:51:08.930327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.479 [2024-11-08 16:51:08.972616] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.479 [2024-11-08 16:51:08.972653] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.049 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.049 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:40.049 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev 
in "${base_bdevs[@]}" 00:09:40.049 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:40.049 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.049 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.318 BaseBdev1_malloc 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.318 true 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.318 [2024-11-08 16:51:09.602904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:40.318 [2024-11-08 16:51:09.602961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.318 [2024-11-08 16:51:09.602981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:40.318 [2024-11-08 16:51:09.602991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.318 [2024-11-08 16:51:09.605326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.318 [2024-11-08 16:51:09.605364] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:09:40.318 BaseBdev1 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.318 BaseBdev2_malloc 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.318 true 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.318 [2024-11-08 16:51:09.642014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:40.318 [2024-11-08 16:51:09.642069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.318 [2024-11-08 16:51:09.642089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:40.318 [2024-11-08 16:51:09.642098] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.318 [2024-11-08 16:51:09.644286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.318 [2024-11-08 16:51:09.644324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:40.318 BaseBdev2 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.318 [2024-11-08 16:51:09.654056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.318 [2024-11-08 16:51:09.656100] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.318 [2024-11-08 16:51:09.656335] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:40.318 [2024-11-08 16:51:09.656353] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:40.318 [2024-11-08 16:51:09.656617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:40.318 [2024-11-08 16:51:09.656764] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:40.318 [2024-11-08 16:51:09.656778] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:40.318 [2024-11-08 16:51:09.656910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.318 "name": "raid_bdev1", 00:09:40.318 "uuid": "644c5c57-1c52-43e2-bb21-e8bb93824fd5", 00:09:40.318 "strip_size_kb": 0, 00:09:40.318 "state": "online", 00:09:40.318 "raid_level": "raid1", 00:09:40.318 "superblock": true, 00:09:40.318 "num_base_bdevs": 2, 00:09:40.318 
"num_base_bdevs_discovered": 2, 00:09:40.318 "num_base_bdevs_operational": 2, 00:09:40.318 "base_bdevs_list": [ 00:09:40.318 { 00:09:40.318 "name": "BaseBdev1", 00:09:40.318 "uuid": "b59d8502-4a26-50d6-93c0-836b8ac18692", 00:09:40.318 "is_configured": true, 00:09:40.318 "data_offset": 2048, 00:09:40.318 "data_size": 63488 00:09:40.318 }, 00:09:40.318 { 00:09:40.318 "name": "BaseBdev2", 00:09:40.318 "uuid": "34fecdf9-5e21-5c6a-8870-aa36c47d2bf4", 00:09:40.318 "is_configured": true, 00:09:40.318 "data_offset": 2048, 00:09:40.318 "data_size": 63488 00:09:40.318 } 00:09:40.318 ] 00:09:40.318 }' 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.318 16:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.578 16:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:40.578 16:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:40.840 [2024-11-08 16:51:10.189490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:41.779 16:51:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.779 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.779 "name": "raid_bdev1", 00:09:41.779 "uuid": "644c5c57-1c52-43e2-bb21-e8bb93824fd5", 00:09:41.779 "strip_size_kb": 0, 00:09:41.779 "state": "online", 
00:09:41.779 "raid_level": "raid1", 00:09:41.779 "superblock": true, 00:09:41.779 "num_base_bdevs": 2, 00:09:41.779 "num_base_bdevs_discovered": 2, 00:09:41.779 "num_base_bdevs_operational": 2, 00:09:41.779 "base_bdevs_list": [ 00:09:41.779 { 00:09:41.779 "name": "BaseBdev1", 00:09:41.779 "uuid": "b59d8502-4a26-50d6-93c0-836b8ac18692", 00:09:41.779 "is_configured": true, 00:09:41.779 "data_offset": 2048, 00:09:41.779 "data_size": 63488 00:09:41.779 }, 00:09:41.779 { 00:09:41.779 "name": "BaseBdev2", 00:09:41.779 "uuid": "34fecdf9-5e21-5c6a-8870-aa36c47d2bf4", 00:09:41.779 "is_configured": true, 00:09:41.780 "data_offset": 2048, 00:09:41.780 "data_size": 63488 00:09:41.780 } 00:09:41.780 ] 00:09:41.780 }' 00:09:41.780 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.780 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.350 [2024-11-08 16:51:11.589343] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.350 [2024-11-08 16:51:11.589440] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.350 [2024-11-08 16:51:11.592021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.350 [2024-11-08 16:51:11.592119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.350 [2024-11-08 16:51:11.592238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.350 [2024-11-08 16:51:11.592286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name 
raid_bdev1, state offline 00:09:42.350 { 00:09:42.350 "results": [ 00:09:42.350 { 00:09:42.350 "job": "raid_bdev1", 00:09:42.350 "core_mask": "0x1", 00:09:42.350 "workload": "randrw", 00:09:42.350 "percentage": 50, 00:09:42.350 "status": "finished", 00:09:42.350 "queue_depth": 1, 00:09:42.350 "io_size": 131072, 00:09:42.350 "runtime": 1.400725, 00:09:42.350 "iops": 19481.340020346604, 00:09:42.350 "mibps": 2435.1675025433256, 00:09:42.350 "io_failed": 0, 00:09:42.350 "io_timeout": 0, 00:09:42.350 "avg_latency_us": 48.78590227609366, 00:09:42.350 "min_latency_us": 21.799126637554586, 00:09:42.350 "max_latency_us": 1488.1537117903931 00:09:42.350 } 00:09:42.350 ], 00:09:42.350 "core_count": 1 00:09:42.350 } 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74779 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74779 ']' 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74779 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74779 00:09:42.350 killing process with pid 74779 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74779' 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74779 00:09:42.350 [2024-11-08 
16:51:11.622255] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74779 00:09:42.350 [2024-11-08 16:51:11.638274] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZqAqbM58VT 00:09:42.350 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:42.610 ************************************ 00:09:42.610 END TEST raid_read_error_test 00:09:42.610 ************************************ 00:09:42.610 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:42.610 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:42.610 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.610 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:42.610 16:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:42.610 00:09:42.610 real 0m3.231s 00:09:42.610 user 0m4.112s 00:09:42.610 sys 0m0.502s 00:09:42.610 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.610 16:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.610 16:51:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:42.610 16:51:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:42.610 16:51:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.610 16:51:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.610 ************************************ 00:09:42.610 START TEST 
raid_write_error_test 00:09:42.610 ************************************ 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:42.610 16:51:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oIPX6jSsIl 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74908 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74908 00:09:42.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74908 ']' 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.610 16:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.610 [2024-11-08 16:51:12.048816] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:42.610 [2024-11-08 16:51:12.048937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74908 ] 00:09:42.870 [2024-11-08 16:51:12.205231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.870 [2024-11-08 16:51:12.250302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.870 [2024-11-08 16:51:12.292973] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.870 [2024-11-08 16:51:12.293008] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.439 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.439 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:43.439 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.439 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:43.439 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.440 BaseBdev1_malloc 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.440 true 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.440 [2024-11-08 16:51:12.898942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:43.440 [2024-11-08 16:51:12.899050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.440 [2024-11-08 16:51:12.899090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:43.440 [2024-11-08 16:51:12.899118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.440 [2024-11-08 16:51:12.901264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.440 [2024-11-08 16:51:12.901336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:43.440 BaseBdev1 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.440 BaseBdev2_malloc 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:43.440 16:51:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.440 true 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.440 [2024-11-08 16:51:12.934988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:43.440 [2024-11-08 16:51:12.935083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.440 [2024-11-08 16:51:12.935141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:43.440 [2024-11-08 16:51:12.935173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.440 [2024-11-08 16:51:12.937264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.440 [2024-11-08 16:51:12.937335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:43.440 BaseBdev2 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.440 [2024-11-08 16:51:12.942999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:43.440 [2024-11-08 16:51:12.944856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.440 [2024-11-08 16:51:12.945064] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:43.440 [2024-11-08 16:51:12.945114] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.440 [2024-11-08 16:51:12.945377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:43.440 [2024-11-08 16:51:12.945574] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:43.440 [2024-11-08 16:51:12.945592] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:43.440 [2024-11-08 16:51:12.945721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.440 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.698 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.698 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.698 "name": "raid_bdev1", 00:09:43.698 "uuid": "eab6a7c6-8dc8-4d20-af4e-9d317055ddfa", 00:09:43.698 "strip_size_kb": 0, 00:09:43.698 "state": "online", 00:09:43.698 "raid_level": "raid1", 00:09:43.698 "superblock": true, 00:09:43.698 "num_base_bdevs": 2, 00:09:43.698 "num_base_bdevs_discovered": 2, 00:09:43.698 "num_base_bdevs_operational": 2, 00:09:43.698 "base_bdevs_list": [ 00:09:43.698 { 00:09:43.698 "name": "BaseBdev1", 00:09:43.698 "uuid": "0efb31c7-1927-5ffa-96bb-b4dd0e1fd136", 00:09:43.698 "is_configured": true, 00:09:43.698 "data_offset": 2048, 00:09:43.698 "data_size": 63488 00:09:43.698 }, 00:09:43.698 { 00:09:43.698 "name": "BaseBdev2", 00:09:43.698 "uuid": "bf3526ed-a56a-5ba3-8bfb-5d5244412154", 00:09:43.698 "is_configured": true, 00:09:43.698 "data_offset": 2048, 00:09:43.698 "data_size": 63488 00:09:43.698 } 00:09:43.698 ] 00:09:43.698 }' 00:09:43.698 16:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.698 16:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.957 16:51:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:43.957 16:51:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:44.217 [2024-11-08 16:51:13.502428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.157 [2024-11-08 16:51:14.414631] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:45.157 [2024-11-08 16:51:14.414694] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.157 [2024-11-08 16:51:14.414917] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.157 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.158 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.158 "name": "raid_bdev1", 00:09:45.158 "uuid": "eab6a7c6-8dc8-4d20-af4e-9d317055ddfa", 00:09:45.158 "strip_size_kb": 0, 00:09:45.158 "state": "online", 00:09:45.158 "raid_level": "raid1", 00:09:45.158 "superblock": true, 00:09:45.158 "num_base_bdevs": 2, 00:09:45.158 "num_base_bdevs_discovered": 1, 00:09:45.158 "num_base_bdevs_operational": 1, 00:09:45.158 "base_bdevs_list": [ 00:09:45.158 { 00:09:45.158 "name": null, 00:09:45.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.158 "is_configured": false, 00:09:45.158 "data_offset": 0, 00:09:45.158 "data_size": 63488 00:09:45.158 }, 00:09:45.158 { 00:09:45.158 "name": 
"BaseBdev2", 00:09:45.158 "uuid": "bf3526ed-a56a-5ba3-8bfb-5d5244412154", 00:09:45.158 "is_configured": true, 00:09:45.158 "data_offset": 2048, 00:09:45.158 "data_size": 63488 00:09:45.158 } 00:09:45.158 ] 00:09:45.158 }' 00:09:45.158 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.158 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.417 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.417 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.417 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.417 [2024-11-08 16:51:14.887550] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.417 [2024-11-08 16:51:14.887685] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.417 [2024-11-08 16:51:14.890173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.417 [2024-11-08 16:51:14.890257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.417 [2024-11-08 16:51:14.890326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.417 [2024-11-08 16:51:14.890368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:45.417 { 00:09:45.417 "results": [ 00:09:45.417 { 00:09:45.417 "job": "raid_bdev1", 00:09:45.417 "core_mask": "0x1", 00:09:45.417 "workload": "randrw", 00:09:45.417 "percentage": 50, 00:09:45.417 "status": "finished", 00:09:45.417 "queue_depth": 1, 00:09:45.417 "io_size": 131072, 00:09:45.417 "runtime": 1.38589, 00:09:45.417 "iops": 23002.547099697666, 00:09:45.417 "mibps": 2875.318387462208, 00:09:45.417 "io_failed": 0, 00:09:45.417 "io_timeout": 0, 
00:09:45.417 "avg_latency_us": 40.91685484866289, 00:09:45.417 "min_latency_us": 21.910917030567685, 00:09:45.417 "max_latency_us": 1395.1441048034935 00:09:45.417 } 00:09:45.417 ], 00:09:45.417 "core_count": 1 00:09:45.417 } 00:09:45.417 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.417 16:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74908 00:09:45.417 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74908 ']' 00:09:45.417 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74908 00:09:45.417 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:45.418 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.418 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74908 00:09:45.418 killing process with pid 74908 00:09:45.418 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.418 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.418 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74908' 00:09:45.418 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74908 00:09:45.418 [2024-11-08 16:51:14.929156] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.418 16:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74908 00:09:45.677 [2024-11-08 16:51:14.944933] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.677 16:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oIPX6jSsIl 00:09:45.677 16:51:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:45.677 16:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:45.677 16:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:45.677 16:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:45.677 16:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:45.677 16:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:45.677 16:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:45.677 00:09:45.677 real 0m3.243s 00:09:45.677 user 0m4.152s 00:09:45.677 sys 0m0.498s 00:09:45.677 ************************************ 00:09:45.677 END TEST raid_write_error_test 00:09:45.677 ************************************ 00:09:45.677 16:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.677 16:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.937 16:51:15 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:45.937 16:51:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:45.937 16:51:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:45.937 16:51:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:45.937 16:51:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.937 16:51:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.937 ************************************ 00:09:45.937 START TEST raid_state_function_test 00:09:45.937 ************************************ 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:45.937 
16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75041 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75041' 00:09:45.937 Process raid pid: 75041 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75041 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75041 ']' 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.937 16:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.937 [2024-11-08 16:51:15.348731] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:45.937 [2024-11-08 16:51:15.348931] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.198 [2024-11-08 16:51:15.490966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.198 [2024-11-08 16:51:15.536759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.198 [2024-11-08 16:51:15.578265] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.198 [2024-11-08 16:51:15.578388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.767 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.767 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:46.767 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:46.767 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.767 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.767 [2024-11-08 16:51:16.195471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.767 [2024-11-08 16:51:16.195530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.767 [2024-11-08 16:51:16.195544] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.768 [2024-11-08 16:51:16.195554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.768 [2024-11-08 16:51:16.195561] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:46.768 [2024-11-08 16:51:16.195573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.768 16:51:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.768 "name": "Existed_Raid", 00:09:46.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.768 "strip_size_kb": 64, 00:09:46.768 "state": "configuring", 00:09:46.768 "raid_level": "raid0", 00:09:46.768 "superblock": false, 00:09:46.768 "num_base_bdevs": 3, 00:09:46.768 "num_base_bdevs_discovered": 0, 00:09:46.768 "num_base_bdevs_operational": 3, 00:09:46.768 "base_bdevs_list": [ 00:09:46.768 { 00:09:46.768 "name": "BaseBdev1", 00:09:46.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.768 "is_configured": false, 00:09:46.768 "data_offset": 0, 00:09:46.768 "data_size": 0 00:09:46.768 }, 00:09:46.768 { 00:09:46.768 "name": "BaseBdev2", 00:09:46.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.768 "is_configured": false, 00:09:46.768 "data_offset": 0, 00:09:46.768 "data_size": 0 00:09:46.768 }, 00:09:46.768 { 00:09:46.768 "name": "BaseBdev3", 00:09:46.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.768 "is_configured": false, 00:09:46.768 "data_offset": 0, 00:09:46.768 "data_size": 0 00:09:46.768 } 00:09:46.768 ] 00:09:46.768 }' 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.768 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.337 16:51:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.337 [2024-11-08 16:51:16.638665] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.337 [2024-11-08 16:51:16.638773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.337 [2024-11-08 16:51:16.650676] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.337 [2024-11-08 16:51:16.650761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.337 [2024-11-08 16:51:16.650793] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.337 [2024-11-08 16:51:16.650842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.337 [2024-11-08 16:51:16.650874] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.337 [2024-11-08 16:51:16.650896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.337 [2024-11-08 16:51:16.671942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.337 BaseBdev1 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.337 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.337 [ 00:09:47.337 { 00:09:47.337 "name": "BaseBdev1", 00:09:47.337 "aliases": [ 00:09:47.337 "d36c50d4-17ee-4d07-973d-733eefb129a0" 00:09:47.337 ], 00:09:47.337 
"product_name": "Malloc disk", 00:09:47.337 "block_size": 512, 00:09:47.337 "num_blocks": 65536, 00:09:47.337 "uuid": "d36c50d4-17ee-4d07-973d-733eefb129a0", 00:09:47.337 "assigned_rate_limits": { 00:09:47.337 "rw_ios_per_sec": 0, 00:09:47.337 "rw_mbytes_per_sec": 0, 00:09:47.337 "r_mbytes_per_sec": 0, 00:09:47.337 "w_mbytes_per_sec": 0 00:09:47.337 }, 00:09:47.337 "claimed": true, 00:09:47.337 "claim_type": "exclusive_write", 00:09:47.337 "zoned": false, 00:09:47.337 "supported_io_types": { 00:09:47.337 "read": true, 00:09:47.337 "write": true, 00:09:47.337 "unmap": true, 00:09:47.337 "flush": true, 00:09:47.337 "reset": true, 00:09:47.337 "nvme_admin": false, 00:09:47.337 "nvme_io": false, 00:09:47.337 "nvme_io_md": false, 00:09:47.337 "write_zeroes": true, 00:09:47.337 "zcopy": true, 00:09:47.337 "get_zone_info": false, 00:09:47.338 "zone_management": false, 00:09:47.338 "zone_append": false, 00:09:47.338 "compare": false, 00:09:47.338 "compare_and_write": false, 00:09:47.338 "abort": true, 00:09:47.338 "seek_hole": false, 00:09:47.338 "seek_data": false, 00:09:47.338 "copy": true, 00:09:47.338 "nvme_iov_md": false 00:09:47.338 }, 00:09:47.338 "memory_domains": [ 00:09:47.338 { 00:09:47.338 "dma_device_id": "system", 00:09:47.338 "dma_device_type": 1 00:09:47.338 }, 00:09:47.338 { 00:09:47.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.338 "dma_device_type": 2 00:09:47.338 } 00:09:47.338 ], 00:09:47.338 "driver_specific": {} 00:09:47.338 } 00:09:47.338 ] 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.338 16:51:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.338 "name": "Existed_Raid", 00:09:47.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.338 "strip_size_kb": 64, 00:09:47.338 "state": "configuring", 00:09:47.338 "raid_level": "raid0", 00:09:47.338 "superblock": false, 00:09:47.338 "num_base_bdevs": 3, 00:09:47.338 "num_base_bdevs_discovered": 1, 00:09:47.338 "num_base_bdevs_operational": 3, 00:09:47.338 "base_bdevs_list": [ 00:09:47.338 { 00:09:47.338 "name": "BaseBdev1", 
00:09:47.338 "uuid": "d36c50d4-17ee-4d07-973d-733eefb129a0", 00:09:47.338 "is_configured": true, 00:09:47.338 "data_offset": 0, 00:09:47.338 "data_size": 65536 00:09:47.338 }, 00:09:47.338 { 00:09:47.338 "name": "BaseBdev2", 00:09:47.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.338 "is_configured": false, 00:09:47.338 "data_offset": 0, 00:09:47.338 "data_size": 0 00:09:47.338 }, 00:09:47.338 { 00:09:47.338 "name": "BaseBdev3", 00:09:47.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.338 "is_configured": false, 00:09:47.338 "data_offset": 0, 00:09:47.338 "data_size": 0 00:09:47.338 } 00:09:47.338 ] 00:09:47.338 }' 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.338 16:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 [2024-11-08 16:51:17.159207] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.907 [2024-11-08 16:51:17.159265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 [2024-11-08 
16:51:17.171211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.907 [2024-11-08 16:51:17.173135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.907 [2024-11-08 16:51:17.173179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.907 [2024-11-08 16:51:17.173205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.907 [2024-11-08 16:51:17.173216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.907 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.907 "name": "Existed_Raid", 00:09:47.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.907 "strip_size_kb": 64, 00:09:47.907 "state": "configuring", 00:09:47.907 "raid_level": "raid0", 00:09:47.907 "superblock": false, 00:09:47.907 "num_base_bdevs": 3, 00:09:47.907 "num_base_bdevs_discovered": 1, 00:09:47.907 "num_base_bdevs_operational": 3, 00:09:47.907 "base_bdevs_list": [ 00:09:47.907 { 00:09:47.907 "name": "BaseBdev1", 00:09:47.907 "uuid": "d36c50d4-17ee-4d07-973d-733eefb129a0", 00:09:47.907 "is_configured": true, 00:09:47.907 "data_offset": 0, 00:09:47.907 "data_size": 65536 00:09:47.907 }, 00:09:47.907 { 00:09:47.907 "name": "BaseBdev2", 00:09:47.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.907 "is_configured": false, 00:09:47.907 "data_offset": 0, 00:09:47.907 "data_size": 0 00:09:47.907 }, 00:09:47.907 { 00:09:47.907 "name": "BaseBdev3", 00:09:47.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.907 "is_configured": false, 00:09:47.907 "data_offset": 0, 00:09:47.907 "data_size": 0 00:09:47.907 } 00:09:47.907 ] 00:09:47.907 }' 00:09:47.908 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:47.908 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.168 [2024-11-08 16:51:17.618253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.168 BaseBdev2 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:48.168 16:51:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.168 [ 00:09:48.168 { 00:09:48.168 "name": "BaseBdev2", 00:09:48.168 "aliases": [ 00:09:48.168 "867bf698-69f4-44b2-91b7-236df28ab0db" 00:09:48.168 ], 00:09:48.168 "product_name": "Malloc disk", 00:09:48.168 "block_size": 512, 00:09:48.168 "num_blocks": 65536, 00:09:48.168 "uuid": "867bf698-69f4-44b2-91b7-236df28ab0db", 00:09:48.168 "assigned_rate_limits": { 00:09:48.168 "rw_ios_per_sec": 0, 00:09:48.168 "rw_mbytes_per_sec": 0, 00:09:48.168 "r_mbytes_per_sec": 0, 00:09:48.168 "w_mbytes_per_sec": 0 00:09:48.168 }, 00:09:48.168 "claimed": true, 00:09:48.168 "claim_type": "exclusive_write", 00:09:48.168 "zoned": false, 00:09:48.168 "supported_io_types": { 00:09:48.168 "read": true, 00:09:48.168 "write": true, 00:09:48.168 "unmap": true, 00:09:48.168 "flush": true, 00:09:48.168 "reset": true, 00:09:48.168 "nvme_admin": false, 00:09:48.168 "nvme_io": false, 00:09:48.168 "nvme_io_md": false, 00:09:48.168 "write_zeroes": true, 00:09:48.168 "zcopy": true, 00:09:48.168 "get_zone_info": false, 00:09:48.168 "zone_management": false, 00:09:48.168 "zone_append": false, 00:09:48.168 "compare": false, 00:09:48.168 "compare_and_write": false, 00:09:48.168 "abort": true, 00:09:48.168 "seek_hole": false, 00:09:48.168 "seek_data": false, 00:09:48.168 "copy": true, 00:09:48.168 "nvme_iov_md": false 00:09:48.168 }, 00:09:48.168 "memory_domains": [ 00:09:48.168 { 00:09:48.168 "dma_device_id": "system", 00:09:48.168 "dma_device_type": 1 00:09:48.168 }, 00:09:48.168 { 00:09:48.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.168 "dma_device_type": 2 00:09:48.168 } 00:09:48.168 ], 00:09:48.168 "driver_specific": {} 00:09:48.168 } 00:09:48.168 ] 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.168 16:51:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.168 16:51:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.428 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.428 "name": "Existed_Raid", 00:09:48.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.428 "strip_size_kb": 64, 00:09:48.428 "state": "configuring", 00:09:48.428 "raid_level": "raid0", 00:09:48.428 "superblock": false, 00:09:48.428 "num_base_bdevs": 3, 00:09:48.428 "num_base_bdevs_discovered": 2, 00:09:48.428 "num_base_bdevs_operational": 3, 00:09:48.428 "base_bdevs_list": [ 00:09:48.428 { 00:09:48.428 "name": "BaseBdev1", 00:09:48.428 "uuid": "d36c50d4-17ee-4d07-973d-733eefb129a0", 00:09:48.428 "is_configured": true, 00:09:48.428 "data_offset": 0, 00:09:48.428 "data_size": 65536 00:09:48.428 }, 00:09:48.428 { 00:09:48.428 "name": "BaseBdev2", 00:09:48.428 "uuid": "867bf698-69f4-44b2-91b7-236df28ab0db", 00:09:48.428 "is_configured": true, 00:09:48.428 "data_offset": 0, 00:09:48.428 "data_size": 65536 00:09:48.428 }, 00:09:48.428 { 00:09:48.428 "name": "BaseBdev3", 00:09:48.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.428 "is_configured": false, 00:09:48.428 "data_offset": 0, 00:09:48.428 "data_size": 0 00:09:48.428 } 00:09:48.428 ] 00:09:48.428 }' 00:09:48.428 16:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.428 16:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.687 [2024-11-08 16:51:18.084714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.687 [2024-11-08 16:51:18.084762] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:48.687 [2024-11-08 16:51:18.084773] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:48.687 [2024-11-08 16:51:18.085080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:48.687 [2024-11-08 16:51:18.085245] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:48.687 [2024-11-08 16:51:18.085257] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:48.687 [2024-11-08 16:51:18.085480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.687 BaseBdev3 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.687 
16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:48.687 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.688 [ 00:09:48.688 { 00:09:48.688 "name": "BaseBdev3", 00:09:48.688 "aliases": [ 00:09:48.688 "a73c2945-ffe2-4ef2-af03-7e8558d971cc" 00:09:48.688 ], 00:09:48.688 "product_name": "Malloc disk", 00:09:48.688 "block_size": 512, 00:09:48.688 "num_blocks": 65536, 00:09:48.688 "uuid": "a73c2945-ffe2-4ef2-af03-7e8558d971cc", 00:09:48.688 "assigned_rate_limits": { 00:09:48.688 "rw_ios_per_sec": 0, 00:09:48.688 "rw_mbytes_per_sec": 0, 00:09:48.688 "r_mbytes_per_sec": 0, 00:09:48.688 "w_mbytes_per_sec": 0 00:09:48.688 }, 00:09:48.688 "claimed": true, 00:09:48.688 "claim_type": "exclusive_write", 00:09:48.688 "zoned": false, 00:09:48.688 "supported_io_types": { 00:09:48.688 "read": true, 00:09:48.688 "write": true, 00:09:48.688 "unmap": true, 00:09:48.688 "flush": true, 00:09:48.688 "reset": true, 00:09:48.688 "nvme_admin": false, 00:09:48.688 "nvme_io": false, 00:09:48.688 "nvme_io_md": false, 00:09:48.688 "write_zeroes": true, 00:09:48.688 "zcopy": true, 00:09:48.688 "get_zone_info": false, 00:09:48.688 "zone_management": false, 00:09:48.688 "zone_append": false, 00:09:48.688 "compare": false, 00:09:48.688 "compare_and_write": false, 00:09:48.688 "abort": true, 00:09:48.688 "seek_hole": false, 00:09:48.688 "seek_data": false, 00:09:48.688 "copy": true, 00:09:48.688 "nvme_iov_md": false 00:09:48.688 }, 00:09:48.688 "memory_domains": [ 00:09:48.688 { 00:09:48.688 "dma_device_id": "system", 00:09:48.688 "dma_device_type": 1 00:09:48.688 }, 00:09:48.688 { 00:09:48.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.688 "dma_device_type": 2 00:09:48.688 } 00:09:48.688 ], 00:09:48.688 "driver_specific": {} 00:09:48.688 } 00:09:48.688 ] 
00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.688 "name": "Existed_Raid", 00:09:48.688 "uuid": "ca499984-3abf-4dbd-a793-3b65888d1b27", 00:09:48.688 "strip_size_kb": 64, 00:09:48.688 "state": "online", 00:09:48.688 "raid_level": "raid0", 00:09:48.688 "superblock": false, 00:09:48.688 "num_base_bdevs": 3, 00:09:48.688 "num_base_bdevs_discovered": 3, 00:09:48.688 "num_base_bdevs_operational": 3, 00:09:48.688 "base_bdevs_list": [ 00:09:48.688 { 00:09:48.688 "name": "BaseBdev1", 00:09:48.688 "uuid": "d36c50d4-17ee-4d07-973d-733eefb129a0", 00:09:48.688 "is_configured": true, 00:09:48.688 "data_offset": 0, 00:09:48.688 "data_size": 65536 00:09:48.688 }, 00:09:48.688 { 00:09:48.688 "name": "BaseBdev2", 00:09:48.688 "uuid": "867bf698-69f4-44b2-91b7-236df28ab0db", 00:09:48.688 "is_configured": true, 00:09:48.688 "data_offset": 0, 00:09:48.688 "data_size": 65536 00:09:48.688 }, 00:09:48.688 { 00:09:48.688 "name": "BaseBdev3", 00:09:48.688 "uuid": "a73c2945-ffe2-4ef2-af03-7e8558d971cc", 00:09:48.688 "is_configured": true, 00:09:48.688 "data_offset": 0, 00:09:48.688 "data_size": 65536 00:09:48.688 } 00:09:48.688 ] 00:09:48.688 }' 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.688 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.257 [2024-11-08 16:51:18.484436] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.257 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.257 "name": "Existed_Raid", 00:09:49.257 "aliases": [ 00:09:49.257 "ca499984-3abf-4dbd-a793-3b65888d1b27" 00:09:49.257 ], 00:09:49.257 "product_name": "Raid Volume", 00:09:49.257 "block_size": 512, 00:09:49.257 "num_blocks": 196608, 00:09:49.258 "uuid": "ca499984-3abf-4dbd-a793-3b65888d1b27", 00:09:49.258 "assigned_rate_limits": { 00:09:49.258 "rw_ios_per_sec": 0, 00:09:49.258 "rw_mbytes_per_sec": 0, 00:09:49.258 "r_mbytes_per_sec": 0, 00:09:49.258 "w_mbytes_per_sec": 0 00:09:49.258 }, 00:09:49.258 "claimed": false, 00:09:49.258 "zoned": false, 00:09:49.258 "supported_io_types": { 00:09:49.258 "read": true, 00:09:49.258 "write": true, 00:09:49.258 "unmap": true, 00:09:49.258 "flush": true, 00:09:49.258 "reset": true, 00:09:49.258 "nvme_admin": false, 00:09:49.258 "nvme_io": false, 00:09:49.258 "nvme_io_md": false, 00:09:49.258 "write_zeroes": true, 00:09:49.258 "zcopy": false, 00:09:49.258 "get_zone_info": false, 00:09:49.258 "zone_management": false, 00:09:49.258 
"zone_append": false, 00:09:49.258 "compare": false, 00:09:49.258 "compare_and_write": false, 00:09:49.258 "abort": false, 00:09:49.258 "seek_hole": false, 00:09:49.258 "seek_data": false, 00:09:49.258 "copy": false, 00:09:49.258 "nvme_iov_md": false 00:09:49.258 }, 00:09:49.258 "memory_domains": [ 00:09:49.258 { 00:09:49.258 "dma_device_id": "system", 00:09:49.258 "dma_device_type": 1 00:09:49.258 }, 00:09:49.258 { 00:09:49.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.258 "dma_device_type": 2 00:09:49.258 }, 00:09:49.258 { 00:09:49.258 "dma_device_id": "system", 00:09:49.258 "dma_device_type": 1 00:09:49.258 }, 00:09:49.258 { 00:09:49.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.258 "dma_device_type": 2 00:09:49.258 }, 00:09:49.258 { 00:09:49.258 "dma_device_id": "system", 00:09:49.258 "dma_device_type": 1 00:09:49.258 }, 00:09:49.258 { 00:09:49.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.258 "dma_device_type": 2 00:09:49.258 } 00:09:49.258 ], 00:09:49.258 "driver_specific": { 00:09:49.258 "raid": { 00:09:49.258 "uuid": "ca499984-3abf-4dbd-a793-3b65888d1b27", 00:09:49.258 "strip_size_kb": 64, 00:09:49.258 "state": "online", 00:09:49.258 "raid_level": "raid0", 00:09:49.258 "superblock": false, 00:09:49.258 "num_base_bdevs": 3, 00:09:49.258 "num_base_bdevs_discovered": 3, 00:09:49.258 "num_base_bdevs_operational": 3, 00:09:49.258 "base_bdevs_list": [ 00:09:49.258 { 00:09:49.258 "name": "BaseBdev1", 00:09:49.258 "uuid": "d36c50d4-17ee-4d07-973d-733eefb129a0", 00:09:49.258 "is_configured": true, 00:09:49.258 "data_offset": 0, 00:09:49.258 "data_size": 65536 00:09:49.258 }, 00:09:49.258 { 00:09:49.258 "name": "BaseBdev2", 00:09:49.258 "uuid": "867bf698-69f4-44b2-91b7-236df28ab0db", 00:09:49.258 "is_configured": true, 00:09:49.258 "data_offset": 0, 00:09:49.258 "data_size": 65536 00:09:49.258 }, 00:09:49.258 { 00:09:49.258 "name": "BaseBdev3", 00:09:49.258 "uuid": "a73c2945-ffe2-4ef2-af03-7e8558d971cc", 00:09:49.258 "is_configured": true, 
00:09:49.258 "data_offset": 0, 00:09:49.258 "data_size": 65536 00:09:49.258 } 00:09:49.258 ] 00:09:49.258 } 00:09:49.258 } 00:09:49.258 }' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:49.258 BaseBdev2 00:09:49.258 BaseBdev3' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.258 [2024-11-08 16:51:18.735743] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:49.258 [2024-11-08 16:51:18.735825] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.258 [2024-11-08 16:51:18.735889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.258 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.258 "name": "Existed_Raid", 00:09:49.258 "uuid": "ca499984-3abf-4dbd-a793-3b65888d1b27", 00:09:49.258 "strip_size_kb": 64, 00:09:49.258 "state": "offline", 00:09:49.258 "raid_level": "raid0", 00:09:49.258 "superblock": false, 00:09:49.258 "num_base_bdevs": 3, 00:09:49.258 "num_base_bdevs_discovered": 2, 00:09:49.258 "num_base_bdevs_operational": 2, 00:09:49.258 "base_bdevs_list": [ 00:09:49.258 { 00:09:49.258 "name": null, 00:09:49.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.258 "is_configured": false, 00:09:49.258 "data_offset": 0, 00:09:49.258 "data_size": 65536 00:09:49.258 }, 00:09:49.258 { 00:09:49.258 "name": "BaseBdev2", 00:09:49.258 "uuid": "867bf698-69f4-44b2-91b7-236df28ab0db", 00:09:49.258 "is_configured": true, 00:09:49.258 "data_offset": 0, 00:09:49.258 "data_size": 65536 00:09:49.258 }, 00:09:49.258 { 00:09:49.258 "name": "BaseBdev3", 00:09:49.258 "uuid": "a73c2945-ffe2-4ef2-af03-7e8558d971cc", 00:09:49.258 "is_configured": true, 00:09:49.258 "data_offset": 0, 00:09:49.258 "data_size": 65536 00:09:49.259 } 00:09:49.259 ] 00:09:49.259 }' 00:09:49.259 16:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.259 16:51:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.828 [2024-11-08 16:51:19.190446] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.828 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.829 [2024-11-08 16:51:19.245689] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.829 [2024-11-08 16:51:19.245739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.829 16:51:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.829 BaseBdev2 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.829 16:51:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.829 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.829 [ 00:09:49.829 { 00:09:49.829 "name": "BaseBdev2", 00:09:49.829 "aliases": [ 00:09:49.829 "72e0a4c7-812d-489f-817f-e85cf7a0db33" 00:09:49.829 ], 00:09:49.829 "product_name": "Malloc disk", 00:09:49.829 "block_size": 512, 00:09:49.829 "num_blocks": 65536, 00:09:49.829 "uuid": "72e0a4c7-812d-489f-817f-e85cf7a0db33", 00:09:49.829 "assigned_rate_limits": { 00:09:49.829 "rw_ios_per_sec": 0, 00:09:49.829 "rw_mbytes_per_sec": 0, 00:09:49.829 "r_mbytes_per_sec": 0, 00:09:49.829 "w_mbytes_per_sec": 0 00:09:49.829 }, 00:09:49.829 "claimed": false, 00:09:49.829 "zoned": false, 00:09:49.829 "supported_io_types": { 00:09:49.829 "read": true, 00:09:49.829 "write": true, 00:09:49.829 "unmap": true, 00:09:49.829 "flush": true, 00:09:49.829 "reset": true, 00:09:49.829 "nvme_admin": false, 00:09:49.829 "nvme_io": false, 00:09:49.829 "nvme_io_md": false, 00:09:49.829 "write_zeroes": true, 00:09:49.829 "zcopy": true, 00:09:49.829 "get_zone_info": false, 00:09:49.829 "zone_management": false, 00:09:49.829 "zone_append": false, 00:09:49.829 "compare": false, 00:09:49.829 "compare_and_write": false, 00:09:49.829 "abort": true, 00:09:49.829 "seek_hole": false, 00:09:49.829 "seek_data": false, 00:09:49.829 "copy": true, 00:09:49.829 "nvme_iov_md": false 00:09:50.089 }, 00:09:50.089 "memory_domains": [ 00:09:50.089 { 00:09:50.089 "dma_device_id": "system", 00:09:50.089 "dma_device_type": 1 00:09:50.089 }, 00:09:50.089 { 00:09:50.089 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:50.089 "dma_device_type": 2 00:09:50.089 } 00:09:50.089 ], 00:09:50.089 "driver_specific": {} 00:09:50.089 } 00:09:50.089 ] 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.089 BaseBdev3 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.089 16:51:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.089 [ 00:09:50.089 { 00:09:50.089 "name": "BaseBdev3", 00:09:50.089 "aliases": [ 00:09:50.089 "06641a1c-fa79-4391-833b-e2f5cbca0243" 00:09:50.089 ], 00:09:50.089 "product_name": "Malloc disk", 00:09:50.089 "block_size": 512, 00:09:50.089 "num_blocks": 65536, 00:09:50.089 "uuid": "06641a1c-fa79-4391-833b-e2f5cbca0243", 00:09:50.089 "assigned_rate_limits": { 00:09:50.089 "rw_ios_per_sec": 0, 00:09:50.089 "rw_mbytes_per_sec": 0, 00:09:50.089 "r_mbytes_per_sec": 0, 00:09:50.089 "w_mbytes_per_sec": 0 00:09:50.089 }, 00:09:50.089 "claimed": false, 00:09:50.089 "zoned": false, 00:09:50.089 "supported_io_types": { 00:09:50.089 "read": true, 00:09:50.089 "write": true, 00:09:50.089 "unmap": true, 00:09:50.089 "flush": true, 00:09:50.089 "reset": true, 00:09:50.089 "nvme_admin": false, 00:09:50.089 "nvme_io": false, 00:09:50.089 "nvme_io_md": false, 00:09:50.089 "write_zeroes": true, 00:09:50.089 "zcopy": true, 00:09:50.089 "get_zone_info": false, 00:09:50.089 "zone_management": false, 00:09:50.089 "zone_append": false, 00:09:50.089 "compare": false, 00:09:50.089 "compare_and_write": false, 00:09:50.089 "abort": true, 00:09:50.089 "seek_hole": false, 00:09:50.089 "seek_data": false, 00:09:50.089 "copy": true, 00:09:50.089 "nvme_iov_md": false 00:09:50.089 }, 00:09:50.089 "memory_domains": [ 00:09:50.089 { 00:09:50.089 "dma_device_id": "system", 00:09:50.089 "dma_device_type": 1 00:09:50.089 }, 00:09:50.089 { 00:09:50.089 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:50.089 "dma_device_type": 2 00:09:50.089 } 00:09:50.089 ], 00:09:50.089 "driver_specific": {} 00:09:50.089 } 00:09:50.089 ] 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.089 [2024-11-08 16:51:19.418061] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.089 [2024-11-08 16:51:19.418148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.089 [2024-11-08 16:51:19.418189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.089 [2024-11-08 16:51:19.420026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.089 
16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.089 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.090 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.090 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.090 "name": "Existed_Raid", 00:09:50.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.090 "strip_size_kb": 64, 00:09:50.090 "state": "configuring", 00:09:50.090 "raid_level": "raid0", 00:09:50.090 "superblock": false, 00:09:50.090 "num_base_bdevs": 3, 00:09:50.090 "num_base_bdevs_discovered": 2, 00:09:50.090 "num_base_bdevs_operational": 3, 00:09:50.090 "base_bdevs_list": [ 00:09:50.090 { 00:09:50.090 "name": "BaseBdev1", 00:09:50.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.090 "is_configured": false, 00:09:50.090 
"data_offset": 0, 00:09:50.090 "data_size": 0 00:09:50.090 }, 00:09:50.090 { 00:09:50.090 "name": "BaseBdev2", 00:09:50.090 "uuid": "72e0a4c7-812d-489f-817f-e85cf7a0db33", 00:09:50.090 "is_configured": true, 00:09:50.090 "data_offset": 0, 00:09:50.090 "data_size": 65536 00:09:50.090 }, 00:09:50.090 { 00:09:50.090 "name": "BaseBdev3", 00:09:50.090 "uuid": "06641a1c-fa79-4391-833b-e2f5cbca0243", 00:09:50.090 "is_configured": true, 00:09:50.090 "data_offset": 0, 00:09:50.090 "data_size": 65536 00:09:50.090 } 00:09:50.090 ] 00:09:50.090 }' 00:09:50.090 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.090 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.349 [2024-11-08 16:51:19.837349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.349 "name": "Existed_Raid", 00:09:50.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.349 "strip_size_kb": 64, 00:09:50.349 "state": "configuring", 00:09:50.349 "raid_level": "raid0", 00:09:50.349 "superblock": false, 00:09:50.349 "num_base_bdevs": 3, 00:09:50.349 "num_base_bdevs_discovered": 1, 00:09:50.349 "num_base_bdevs_operational": 3, 00:09:50.349 "base_bdevs_list": [ 00:09:50.349 { 00:09:50.349 "name": "BaseBdev1", 00:09:50.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.349 "is_configured": false, 00:09:50.349 "data_offset": 0, 00:09:50.349 "data_size": 0 00:09:50.349 }, 00:09:50.349 { 00:09:50.349 "name": null, 00:09:50.349 "uuid": "72e0a4c7-812d-489f-817f-e85cf7a0db33", 00:09:50.349 "is_configured": false, 00:09:50.349 "data_offset": 0, 00:09:50.349 "data_size": 65536 00:09:50.349 }, 00:09:50.349 { 
00:09:50.349 "name": "BaseBdev3", 00:09:50.349 "uuid": "06641a1c-fa79-4391-833b-e2f5cbca0243", 00:09:50.349 "is_configured": true, 00:09:50.349 "data_offset": 0, 00:09:50.349 "data_size": 65536 00:09:50.349 } 00:09:50.349 ] 00:09:50.349 }' 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.349 16:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.919 [2024-11-08 16:51:20.291484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.919 BaseBdev1 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:50.919 16:51:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.919 [ 00:09:50.919 { 00:09:50.919 "name": "BaseBdev1", 00:09:50.919 "aliases": [ 00:09:50.919 "8b081e24-90d4-4c04-9ee5-9fc549b4f262" 00:09:50.919 ], 00:09:50.919 "product_name": "Malloc disk", 00:09:50.919 "block_size": 512, 00:09:50.919 "num_blocks": 65536, 00:09:50.919 "uuid": "8b081e24-90d4-4c04-9ee5-9fc549b4f262", 00:09:50.919 "assigned_rate_limits": { 00:09:50.919 "rw_ios_per_sec": 0, 00:09:50.919 "rw_mbytes_per_sec": 0, 00:09:50.919 "r_mbytes_per_sec": 0, 00:09:50.919 "w_mbytes_per_sec": 0 00:09:50.919 }, 00:09:50.919 "claimed": true, 00:09:50.919 "claim_type": "exclusive_write", 00:09:50.919 "zoned": false, 00:09:50.919 "supported_io_types": { 00:09:50.919 "read": true, 00:09:50.919 "write": true, 00:09:50.919 "unmap": true, 00:09:50.919 "flush": true, 
00:09:50.919 "reset": true, 00:09:50.919 "nvme_admin": false, 00:09:50.919 "nvme_io": false, 00:09:50.919 "nvme_io_md": false, 00:09:50.919 "write_zeroes": true, 00:09:50.919 "zcopy": true, 00:09:50.919 "get_zone_info": false, 00:09:50.919 "zone_management": false, 00:09:50.919 "zone_append": false, 00:09:50.919 "compare": false, 00:09:50.919 "compare_and_write": false, 00:09:50.919 "abort": true, 00:09:50.919 "seek_hole": false, 00:09:50.919 "seek_data": false, 00:09:50.919 "copy": true, 00:09:50.919 "nvme_iov_md": false 00:09:50.919 }, 00:09:50.919 "memory_domains": [ 00:09:50.919 { 00:09:50.919 "dma_device_id": "system", 00:09:50.919 "dma_device_type": 1 00:09:50.919 }, 00:09:50.919 { 00:09:50.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.919 "dma_device_type": 2 00:09:50.919 } 00:09:50.919 ], 00:09:50.919 "driver_specific": {} 00:09:50.919 } 00:09:50.919 ] 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.919 "name": "Existed_Raid", 00:09:50.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.919 "strip_size_kb": 64, 00:09:50.919 "state": "configuring", 00:09:50.919 "raid_level": "raid0", 00:09:50.919 "superblock": false, 00:09:50.919 "num_base_bdevs": 3, 00:09:50.919 "num_base_bdevs_discovered": 2, 00:09:50.919 "num_base_bdevs_operational": 3, 00:09:50.919 "base_bdevs_list": [ 00:09:50.919 { 00:09:50.919 "name": "BaseBdev1", 00:09:50.919 "uuid": "8b081e24-90d4-4c04-9ee5-9fc549b4f262", 00:09:50.919 "is_configured": true, 00:09:50.919 "data_offset": 0, 00:09:50.919 "data_size": 65536 00:09:50.919 }, 00:09:50.919 { 00:09:50.919 "name": null, 00:09:50.919 "uuid": "72e0a4c7-812d-489f-817f-e85cf7a0db33", 00:09:50.919 "is_configured": false, 00:09:50.919 "data_offset": 0, 00:09:50.919 "data_size": 65536 00:09:50.919 }, 00:09:50.919 { 00:09:50.919 "name": "BaseBdev3", 00:09:50.919 "uuid": "06641a1c-fa79-4391-833b-e2f5cbca0243", 00:09:50.919 "is_configured": true, 00:09:50.919 "data_offset": 0, 00:09:50.919 "data_size": 65536 
00:09:50.919 } 00:09:50.919 ] 00:09:50.919 }' 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.919 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.489 [2024-11-08 16:51:20.762746] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.489 
16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.489 "name": "Existed_Raid", 00:09:51.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.489 "strip_size_kb": 64, 00:09:51.489 "state": "configuring", 00:09:51.489 "raid_level": "raid0", 00:09:51.489 "superblock": false, 00:09:51.489 "num_base_bdevs": 3, 00:09:51.489 "num_base_bdevs_discovered": 1, 00:09:51.489 "num_base_bdevs_operational": 3, 00:09:51.489 "base_bdevs_list": [ 00:09:51.489 { 00:09:51.489 "name": "BaseBdev1", 00:09:51.489 "uuid": "8b081e24-90d4-4c04-9ee5-9fc549b4f262", 00:09:51.489 "is_configured": true, 00:09:51.489 "data_offset": 0, 00:09:51.489 "data_size": 65536 00:09:51.489 }, 00:09:51.489 { 00:09:51.489 "name": null, 
00:09:51.489 "uuid": "72e0a4c7-812d-489f-817f-e85cf7a0db33", 00:09:51.489 "is_configured": false, 00:09:51.489 "data_offset": 0, 00:09:51.489 "data_size": 65536 00:09:51.489 }, 00:09:51.489 { 00:09:51.489 "name": null, 00:09:51.489 "uuid": "06641a1c-fa79-4391-833b-e2f5cbca0243", 00:09:51.489 "is_configured": false, 00:09:51.489 "data_offset": 0, 00:09:51.489 "data_size": 65536 00:09:51.489 } 00:09:51.489 ] 00:09:51.489 }' 00:09:51.489 16:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.490 16:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.750 [2024-11-08 16:51:21.174091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.750 "name": "Existed_Raid", 00:09:51.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.750 "strip_size_kb": 64, 00:09:51.750 "state": "configuring", 00:09:51.750 "raid_level": "raid0", 00:09:51.750 "superblock": false, 00:09:51.750 
"num_base_bdevs": 3, 00:09:51.750 "num_base_bdevs_discovered": 2, 00:09:51.750 "num_base_bdevs_operational": 3, 00:09:51.750 "base_bdevs_list": [ 00:09:51.750 { 00:09:51.750 "name": "BaseBdev1", 00:09:51.750 "uuid": "8b081e24-90d4-4c04-9ee5-9fc549b4f262", 00:09:51.750 "is_configured": true, 00:09:51.750 "data_offset": 0, 00:09:51.750 "data_size": 65536 00:09:51.750 }, 00:09:51.750 { 00:09:51.750 "name": null, 00:09:51.750 "uuid": "72e0a4c7-812d-489f-817f-e85cf7a0db33", 00:09:51.750 "is_configured": false, 00:09:51.750 "data_offset": 0, 00:09:51.750 "data_size": 65536 00:09:51.750 }, 00:09:51.750 { 00:09:51.750 "name": "BaseBdev3", 00:09:51.750 "uuid": "06641a1c-fa79-4391-833b-e2f5cbca0243", 00:09:51.750 "is_configured": true, 00:09:51.750 "data_offset": 0, 00:09:51.750 "data_size": 65536 00:09:51.750 } 00:09:51.750 ] 00:09:51.750 }' 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.750 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.333 16:51:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.333 [2024-11-08 16:51:21.665280] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.333 "name": "Existed_Raid", 00:09:52.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.333 "strip_size_kb": 64, 00:09:52.333 "state": "configuring", 00:09:52.333 "raid_level": "raid0", 00:09:52.333 "superblock": false, 00:09:52.333 "num_base_bdevs": 3, 00:09:52.333 "num_base_bdevs_discovered": 1, 00:09:52.333 "num_base_bdevs_operational": 3, 00:09:52.333 "base_bdevs_list": [ 00:09:52.333 { 00:09:52.333 "name": null, 00:09:52.333 "uuid": "8b081e24-90d4-4c04-9ee5-9fc549b4f262", 00:09:52.333 "is_configured": false, 00:09:52.333 "data_offset": 0, 00:09:52.333 "data_size": 65536 00:09:52.333 }, 00:09:52.333 { 00:09:52.333 "name": null, 00:09:52.333 "uuid": "72e0a4c7-812d-489f-817f-e85cf7a0db33", 00:09:52.333 "is_configured": false, 00:09:52.333 "data_offset": 0, 00:09:52.333 "data_size": 65536 00:09:52.333 }, 00:09:52.333 { 00:09:52.333 "name": "BaseBdev3", 00:09:52.333 "uuid": "06641a1c-fa79-4391-833b-e2f5cbca0243", 00:09:52.333 "is_configured": true, 00:09:52.333 "data_offset": 0, 00:09:52.333 "data_size": 65536 00:09:52.333 } 00:09:52.333 ] 00:09:52.333 }' 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.333 16:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.595 [2024-11-08 16:51:22.099290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.595 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.854 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.854 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.854 "name": "Existed_Raid", 00:09:52.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.854 "strip_size_kb": 64, 00:09:52.854 "state": "configuring", 00:09:52.854 "raid_level": "raid0", 00:09:52.854 "superblock": false, 00:09:52.854 "num_base_bdevs": 3, 00:09:52.854 "num_base_bdevs_discovered": 2, 00:09:52.854 "num_base_bdevs_operational": 3, 00:09:52.854 "base_bdevs_list": [ 00:09:52.854 { 00:09:52.854 "name": null, 00:09:52.854 "uuid": "8b081e24-90d4-4c04-9ee5-9fc549b4f262", 00:09:52.854 "is_configured": false, 00:09:52.854 "data_offset": 0, 00:09:52.854 "data_size": 65536 00:09:52.854 }, 00:09:52.854 { 00:09:52.854 "name": "BaseBdev2", 00:09:52.854 "uuid": "72e0a4c7-812d-489f-817f-e85cf7a0db33", 00:09:52.854 "is_configured": true, 00:09:52.854 "data_offset": 0, 00:09:52.854 "data_size": 65536 00:09:52.854 }, 00:09:52.854 { 00:09:52.854 "name": "BaseBdev3", 00:09:52.854 "uuid": "06641a1c-fa79-4391-833b-e2f5cbca0243", 00:09:52.854 "is_configured": true, 00:09:52.854 "data_offset": 0, 00:09:52.854 "data_size": 65536 00:09:52.854 } 00:09:52.854 ] 00:09:52.854 }' 00:09:52.854 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.854 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.113 16:51:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8b081e24-90d4-4c04-9ee5-9fc549b4f262 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.113 [2024-11-08 16:51:22.549520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:53.113 [2024-11-08 16:51:22.549561] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:53.113 [2024-11-08 16:51:22.549571] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:53.113 [2024-11-08 16:51:22.549846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:09:53.113 [2024-11-08 16:51:22.549965] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:53.113 [2024-11-08 16:51:22.549974] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:53.113 [2024-11-08 16:51:22.550172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.113 NewBaseBdev 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:53.113 [ 00:09:53.113 { 00:09:53.113 "name": "NewBaseBdev", 00:09:53.113 "aliases": [ 00:09:53.113 "8b081e24-90d4-4c04-9ee5-9fc549b4f262" 00:09:53.113 ], 00:09:53.113 "product_name": "Malloc disk", 00:09:53.113 "block_size": 512, 00:09:53.113 "num_blocks": 65536, 00:09:53.113 "uuid": "8b081e24-90d4-4c04-9ee5-9fc549b4f262", 00:09:53.113 "assigned_rate_limits": { 00:09:53.113 "rw_ios_per_sec": 0, 00:09:53.113 "rw_mbytes_per_sec": 0, 00:09:53.113 "r_mbytes_per_sec": 0, 00:09:53.113 "w_mbytes_per_sec": 0 00:09:53.113 }, 00:09:53.113 "claimed": true, 00:09:53.113 "claim_type": "exclusive_write", 00:09:53.113 "zoned": false, 00:09:53.113 "supported_io_types": { 00:09:53.113 "read": true, 00:09:53.113 "write": true, 00:09:53.113 "unmap": true, 00:09:53.113 "flush": true, 00:09:53.113 "reset": true, 00:09:53.113 "nvme_admin": false, 00:09:53.113 "nvme_io": false, 00:09:53.113 "nvme_io_md": false, 00:09:53.113 "write_zeroes": true, 00:09:53.113 "zcopy": true, 00:09:53.113 "get_zone_info": false, 00:09:53.113 "zone_management": false, 00:09:53.113 "zone_append": false, 00:09:53.113 "compare": false, 00:09:53.113 "compare_and_write": false, 00:09:53.113 "abort": true, 00:09:53.113 "seek_hole": false, 00:09:53.113 "seek_data": false, 00:09:53.113 "copy": true, 00:09:53.113 "nvme_iov_md": false 00:09:53.113 }, 00:09:53.113 "memory_domains": [ 00:09:53.113 { 00:09:53.113 "dma_device_id": "system", 00:09:53.113 "dma_device_type": 1 00:09:53.113 }, 00:09:53.113 { 00:09:53.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.113 "dma_device_type": 2 00:09:53.113 } 00:09:53.113 ], 00:09:53.113 "driver_specific": {} 00:09:53.113 } 00:09:53.113 ] 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.113 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.373 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.373 "name": "Existed_Raid", 00:09:53.373 "uuid": "06ff2845-7f2f-4433-b9b3-ed60f6f09af1", 00:09:53.373 "strip_size_kb": 64, 00:09:53.373 "state": "online", 00:09:53.373 "raid_level": "raid0", 00:09:53.373 "superblock": false, 00:09:53.373 "num_base_bdevs": 3, 00:09:53.373 
"num_base_bdevs_discovered": 3, 00:09:53.373 "num_base_bdevs_operational": 3, 00:09:53.373 "base_bdevs_list": [ 00:09:53.373 { 00:09:53.373 "name": "NewBaseBdev", 00:09:53.373 "uuid": "8b081e24-90d4-4c04-9ee5-9fc549b4f262", 00:09:53.373 "is_configured": true, 00:09:53.373 "data_offset": 0, 00:09:53.373 "data_size": 65536 00:09:53.373 }, 00:09:53.373 { 00:09:53.373 "name": "BaseBdev2", 00:09:53.373 "uuid": "72e0a4c7-812d-489f-817f-e85cf7a0db33", 00:09:53.373 "is_configured": true, 00:09:53.373 "data_offset": 0, 00:09:53.373 "data_size": 65536 00:09:53.373 }, 00:09:53.373 { 00:09:53.373 "name": "BaseBdev3", 00:09:53.373 "uuid": "06641a1c-fa79-4391-833b-e2f5cbca0243", 00:09:53.373 "is_configured": true, 00:09:53.373 "data_offset": 0, 00:09:53.373 "data_size": 65536 00:09:53.373 } 00:09:53.373 ] 00:09:53.373 }' 00:09:53.373 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.373 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.632 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:53.632 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:53.632 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.632 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.632 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.632 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.632 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.632 16:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:53.632 16:51:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.632 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.632 [2024-11-08 16:51:22.973146] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.632 16:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.632 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.632 "name": "Existed_Raid", 00:09:53.632 "aliases": [ 00:09:53.632 "06ff2845-7f2f-4433-b9b3-ed60f6f09af1" 00:09:53.632 ], 00:09:53.632 "product_name": "Raid Volume", 00:09:53.632 "block_size": 512, 00:09:53.632 "num_blocks": 196608, 00:09:53.632 "uuid": "06ff2845-7f2f-4433-b9b3-ed60f6f09af1", 00:09:53.632 "assigned_rate_limits": { 00:09:53.632 "rw_ios_per_sec": 0, 00:09:53.632 "rw_mbytes_per_sec": 0, 00:09:53.632 "r_mbytes_per_sec": 0, 00:09:53.632 "w_mbytes_per_sec": 0 00:09:53.632 }, 00:09:53.632 "claimed": false, 00:09:53.632 "zoned": false, 00:09:53.632 "supported_io_types": { 00:09:53.632 "read": true, 00:09:53.632 "write": true, 00:09:53.632 "unmap": true, 00:09:53.632 "flush": true, 00:09:53.632 "reset": true, 00:09:53.632 "nvme_admin": false, 00:09:53.632 "nvme_io": false, 00:09:53.632 "nvme_io_md": false, 00:09:53.632 "write_zeroes": true, 00:09:53.632 "zcopy": false, 00:09:53.632 "get_zone_info": false, 00:09:53.632 "zone_management": false, 00:09:53.632 "zone_append": false, 00:09:53.632 "compare": false, 00:09:53.632 "compare_and_write": false, 00:09:53.632 "abort": false, 00:09:53.632 "seek_hole": false, 00:09:53.632 "seek_data": false, 00:09:53.632 "copy": false, 00:09:53.632 "nvme_iov_md": false 00:09:53.632 }, 00:09:53.632 "memory_domains": [ 00:09:53.632 { 00:09:53.632 "dma_device_id": "system", 00:09:53.632 "dma_device_type": 1 00:09:53.632 }, 00:09:53.632 { 00:09:53.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.632 "dma_device_type": 2 00:09:53.632 }, 
00:09:53.632 { 00:09:53.632 "dma_device_id": "system", 00:09:53.632 "dma_device_type": 1 00:09:53.632 }, 00:09:53.632 { 00:09:53.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.632 "dma_device_type": 2 00:09:53.632 }, 00:09:53.632 { 00:09:53.632 "dma_device_id": "system", 00:09:53.632 "dma_device_type": 1 00:09:53.632 }, 00:09:53.632 { 00:09:53.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.632 "dma_device_type": 2 00:09:53.632 } 00:09:53.632 ], 00:09:53.632 "driver_specific": { 00:09:53.632 "raid": { 00:09:53.632 "uuid": "06ff2845-7f2f-4433-b9b3-ed60f6f09af1", 00:09:53.632 "strip_size_kb": 64, 00:09:53.632 "state": "online", 00:09:53.632 "raid_level": "raid0", 00:09:53.632 "superblock": false, 00:09:53.632 "num_base_bdevs": 3, 00:09:53.633 "num_base_bdevs_discovered": 3, 00:09:53.633 "num_base_bdevs_operational": 3, 00:09:53.633 "base_bdevs_list": [ 00:09:53.633 { 00:09:53.633 "name": "NewBaseBdev", 00:09:53.633 "uuid": "8b081e24-90d4-4c04-9ee5-9fc549b4f262", 00:09:53.633 "is_configured": true, 00:09:53.633 "data_offset": 0, 00:09:53.633 "data_size": 65536 00:09:53.633 }, 00:09:53.633 { 00:09:53.633 "name": "BaseBdev2", 00:09:53.633 "uuid": "72e0a4c7-812d-489f-817f-e85cf7a0db33", 00:09:53.633 "is_configured": true, 00:09:53.633 "data_offset": 0, 00:09:53.633 "data_size": 65536 00:09:53.633 }, 00:09:53.633 { 00:09:53.633 "name": "BaseBdev3", 00:09:53.633 "uuid": "06641a1c-fa79-4391-833b-e2f5cbca0243", 00:09:53.633 "is_configured": true, 00:09:53.633 "data_offset": 0, 00:09:53.633 "data_size": 65536 00:09:53.633 } 00:09:53.633 ] 00:09:53.633 } 00:09:53.633 } 00:09:53.633 }' 00:09:53.633 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.633 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:53.633 BaseBdev2 00:09:53.633 BaseBdev3' 00:09:53.633 16:51:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.633 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.633 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.633 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.633 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:53.633 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.633 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.633 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.892 [2024-11-08 16:51:23.252378] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.892 [2024-11-08 16:51:23.252406] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.892 [2024-11-08 16:51:23.252476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.892 [2024-11-08 16:51:23.252530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.892 [2024-11-08 16:51:23.252542] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75041 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75041 ']' 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75041 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75041 00:09:53.892 killing process with pid 75041 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75041' 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75041 00:09:53.892 [2024-11-08 16:51:23.296953] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.892 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75041 00:09:53.892 [2024-11-08 16:51:23.328334] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:54.152 00:09:54.152 real 0m8.315s 00:09:54.152 user 0m14.139s 00:09:54.152 sys 0m1.645s 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:54.152 ************************************ 00:09:54.152 END TEST raid_state_function_test 00:09:54.152 ************************************ 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.152 16:51:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:54.152 16:51:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:54.152 16:51:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.152 16:51:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.152 ************************************ 00:09:54.152 START TEST raid_state_function_test_sb 00:09:54.152 ************************************ 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:54.152 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75640 00:09:54.153 Process raid pid: 75640 
00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75640' 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75640 00:09:54.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75640 ']' 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.153 16:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.411 [2024-11-08 16:51:23.738228] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:54.412 [2024-11-08 16:51:23.738453] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.412 [2024-11-08 16:51:23.899492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.670 [2024-11-08 16:51:23.944958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.670 [2024-11-08 16:51:23.987293] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.670 [2024-11-08 16:51:23.987383] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.237 [2024-11-08 16:51:24.564862] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.237 [2024-11-08 16:51:24.564911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.237 [2024-11-08 16:51:24.564932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.237 [2024-11-08 16:51:24.564944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.237 [2024-11-08 16:51:24.564950] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:55.237 [2024-11-08 16:51:24.564962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.237 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.238 16:51:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.238 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.238 "name": "Existed_Raid", 00:09:55.238 "uuid": "8c4fb698-47f9-42df-90df-b27186a4b377", 00:09:55.238 "strip_size_kb": 64, 00:09:55.238 "state": "configuring", 00:09:55.238 "raid_level": "raid0", 00:09:55.238 "superblock": true, 00:09:55.238 "num_base_bdevs": 3, 00:09:55.238 "num_base_bdevs_discovered": 0, 00:09:55.238 "num_base_bdevs_operational": 3, 00:09:55.238 "base_bdevs_list": [ 00:09:55.238 { 00:09:55.238 "name": "BaseBdev1", 00:09:55.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.238 "is_configured": false, 00:09:55.238 "data_offset": 0, 00:09:55.238 "data_size": 0 00:09:55.238 }, 00:09:55.238 { 00:09:55.238 "name": "BaseBdev2", 00:09:55.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.238 "is_configured": false, 00:09:55.238 "data_offset": 0, 00:09:55.238 "data_size": 0 00:09:55.238 }, 00:09:55.238 { 00:09:55.238 "name": "BaseBdev3", 00:09:55.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.238 "is_configured": false, 00:09:55.238 "data_offset": 0, 00:09:55.238 "data_size": 0 00:09:55.238 } 00:09:55.238 ] 00:09:55.238 }' 00:09:55.238 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.238 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.497 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.497 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.497 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.497 [2024-11-08 16:51:24.980096] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.497 [2024-11-08 16:51:24.980218] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:55.497 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.497 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.497 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.497 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.497 [2024-11-08 16:51:24.992107] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.497 [2024-11-08 16:51:24.992190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.497 [2024-11-08 16:51:24.992216] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.497 [2024-11-08 16:51:24.992239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.497 [2024-11-08 16:51:24.992257] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.497 [2024-11-08 16:51:24.992277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.497 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.497 16:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.497 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.497 16:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.497 [2024-11-08 16:51:25.012786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.497 BaseBdev1 
00:09:55.497 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.497 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:55.497 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:55.497 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.497 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:55.497 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.497 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.497 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.497 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.497 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.756 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.756 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.756 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.756 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.756 [ 00:09:55.756 { 00:09:55.756 "name": "BaseBdev1", 00:09:55.756 "aliases": [ 00:09:55.756 "64f34bd2-c286-436e-bfd8-c6f27ef3f222" 00:09:55.756 ], 00:09:55.756 "product_name": "Malloc disk", 00:09:55.756 "block_size": 512, 00:09:55.756 "num_blocks": 65536, 00:09:55.756 "uuid": "64f34bd2-c286-436e-bfd8-c6f27ef3f222", 00:09:55.756 "assigned_rate_limits": { 00:09:55.756 
"rw_ios_per_sec": 0, 00:09:55.756 "rw_mbytes_per_sec": 0, 00:09:55.756 "r_mbytes_per_sec": 0, 00:09:55.756 "w_mbytes_per_sec": 0 00:09:55.756 }, 00:09:55.756 "claimed": true, 00:09:55.756 "claim_type": "exclusive_write", 00:09:55.757 "zoned": false, 00:09:55.757 "supported_io_types": { 00:09:55.757 "read": true, 00:09:55.757 "write": true, 00:09:55.757 "unmap": true, 00:09:55.757 "flush": true, 00:09:55.757 "reset": true, 00:09:55.757 "nvme_admin": false, 00:09:55.757 "nvme_io": false, 00:09:55.757 "nvme_io_md": false, 00:09:55.757 "write_zeroes": true, 00:09:55.757 "zcopy": true, 00:09:55.757 "get_zone_info": false, 00:09:55.757 "zone_management": false, 00:09:55.757 "zone_append": false, 00:09:55.757 "compare": false, 00:09:55.757 "compare_and_write": false, 00:09:55.757 "abort": true, 00:09:55.757 "seek_hole": false, 00:09:55.757 "seek_data": false, 00:09:55.757 "copy": true, 00:09:55.757 "nvme_iov_md": false 00:09:55.757 }, 00:09:55.757 "memory_domains": [ 00:09:55.757 { 00:09:55.757 "dma_device_id": "system", 00:09:55.757 "dma_device_type": 1 00:09:55.757 }, 00:09:55.757 { 00:09:55.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.757 "dma_device_type": 2 00:09:55.757 } 00:09:55.757 ], 00:09:55.757 "driver_specific": {} 00:09:55.757 } 00:09:55.757 ] 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.757 "name": "Existed_Raid", 00:09:55.757 "uuid": "2a63c2e2-c20b-461a-b86a-fbde838495c7", 00:09:55.757 "strip_size_kb": 64, 00:09:55.757 "state": "configuring", 00:09:55.757 "raid_level": "raid0", 00:09:55.757 "superblock": true, 00:09:55.757 "num_base_bdevs": 3, 00:09:55.757 "num_base_bdevs_discovered": 1, 00:09:55.757 "num_base_bdevs_operational": 3, 00:09:55.757 "base_bdevs_list": [ 00:09:55.757 { 00:09:55.757 "name": "BaseBdev1", 00:09:55.757 "uuid": "64f34bd2-c286-436e-bfd8-c6f27ef3f222", 00:09:55.757 "is_configured": true, 00:09:55.757 "data_offset": 2048, 00:09:55.757 "data_size": 63488 
00:09:55.757 }, 00:09:55.757 { 00:09:55.757 "name": "BaseBdev2", 00:09:55.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.757 "is_configured": false, 00:09:55.757 "data_offset": 0, 00:09:55.757 "data_size": 0 00:09:55.757 }, 00:09:55.757 { 00:09:55.757 "name": "BaseBdev3", 00:09:55.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.757 "is_configured": false, 00:09:55.757 "data_offset": 0, 00:09:55.757 "data_size": 0 00:09:55.757 } 00:09:55.757 ] 00:09:55.757 }' 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.757 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.015 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.015 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.015 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.015 [2024-11-08 16:51:25.488047] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.015 [2024-11-08 16:51:25.488102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.016 [2024-11-08 16:51:25.500063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.016 [2024-11-08 
16:51:25.501906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.016 [2024-11-08 16:51:25.501947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.016 [2024-11-08 16:51:25.501957] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.016 [2024-11-08 16:51:25.501967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.016 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.274 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.274 "name": "Existed_Raid", 00:09:56.274 "uuid": "45ac7aa6-a534-41e8-9c9d-f5002f172174", 00:09:56.274 "strip_size_kb": 64, 00:09:56.274 "state": "configuring", 00:09:56.274 "raid_level": "raid0", 00:09:56.274 "superblock": true, 00:09:56.274 "num_base_bdevs": 3, 00:09:56.274 "num_base_bdevs_discovered": 1, 00:09:56.274 "num_base_bdevs_operational": 3, 00:09:56.274 "base_bdevs_list": [ 00:09:56.274 { 00:09:56.274 "name": "BaseBdev1", 00:09:56.274 "uuid": "64f34bd2-c286-436e-bfd8-c6f27ef3f222", 00:09:56.274 "is_configured": true, 00:09:56.274 "data_offset": 2048, 00:09:56.274 "data_size": 63488 00:09:56.274 }, 00:09:56.274 { 00:09:56.274 "name": "BaseBdev2", 00:09:56.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.274 "is_configured": false, 00:09:56.274 "data_offset": 0, 00:09:56.274 "data_size": 0 00:09:56.274 }, 00:09:56.274 { 00:09:56.274 "name": "BaseBdev3", 00:09:56.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.274 "is_configured": false, 00:09:56.274 "data_offset": 0, 00:09:56.274 "data_size": 0 00:09:56.274 } 00:09:56.274 ] 00:09:56.274 }' 00:09:56.274 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.274 16:51:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.534 [2024-11-08 16:51:25.951292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.534 BaseBdev2 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.534 [ 00:09:56.534 { 00:09:56.534 "name": "BaseBdev2", 00:09:56.534 "aliases": [ 00:09:56.534 "7e2a2a03-0594-41d2-9158-cc695e2b29e7" 00:09:56.534 ], 00:09:56.534 "product_name": "Malloc disk", 00:09:56.534 "block_size": 512, 00:09:56.534 "num_blocks": 65536, 00:09:56.534 "uuid": "7e2a2a03-0594-41d2-9158-cc695e2b29e7", 00:09:56.534 "assigned_rate_limits": { 00:09:56.534 "rw_ios_per_sec": 0, 00:09:56.534 "rw_mbytes_per_sec": 0, 00:09:56.534 "r_mbytes_per_sec": 0, 00:09:56.534 "w_mbytes_per_sec": 0 00:09:56.534 }, 00:09:56.534 "claimed": true, 00:09:56.534 "claim_type": "exclusive_write", 00:09:56.534 "zoned": false, 00:09:56.534 "supported_io_types": { 00:09:56.534 "read": true, 00:09:56.534 "write": true, 00:09:56.534 "unmap": true, 00:09:56.534 "flush": true, 00:09:56.534 "reset": true, 00:09:56.534 "nvme_admin": false, 00:09:56.534 "nvme_io": false, 00:09:56.534 "nvme_io_md": false, 00:09:56.534 "write_zeroes": true, 00:09:56.534 "zcopy": true, 00:09:56.534 "get_zone_info": false, 00:09:56.534 "zone_management": false, 00:09:56.534 "zone_append": false, 00:09:56.534 "compare": false, 00:09:56.534 "compare_and_write": false, 00:09:56.534 "abort": true, 00:09:56.534 "seek_hole": false, 00:09:56.534 "seek_data": false, 00:09:56.534 "copy": true, 00:09:56.534 "nvme_iov_md": false 00:09:56.534 }, 00:09:56.534 "memory_domains": [ 00:09:56.534 { 00:09:56.534 "dma_device_id": "system", 00:09:56.534 "dma_device_type": 1 00:09:56.534 }, 00:09:56.534 { 00:09:56.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.534 "dma_device_type": 2 00:09:56.534 } 00:09:56.534 ], 00:09:56.534 "driver_specific": {} 00:09:56.534 } 00:09:56.534 ] 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.534 16:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.534 16:51:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.534 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.534 "name": "Existed_Raid", 00:09:56.534 "uuid": "45ac7aa6-a534-41e8-9c9d-f5002f172174", 00:09:56.534 "strip_size_kb": 64, 00:09:56.534 "state": "configuring", 00:09:56.534 "raid_level": "raid0", 00:09:56.534 "superblock": true, 00:09:56.534 "num_base_bdevs": 3, 00:09:56.534 "num_base_bdevs_discovered": 2, 00:09:56.534 "num_base_bdevs_operational": 3, 00:09:56.534 "base_bdevs_list": [ 00:09:56.534 { 00:09:56.534 "name": "BaseBdev1", 00:09:56.534 "uuid": "64f34bd2-c286-436e-bfd8-c6f27ef3f222", 00:09:56.534 "is_configured": true, 00:09:56.534 "data_offset": 2048, 00:09:56.534 "data_size": 63488 00:09:56.534 }, 00:09:56.534 { 00:09:56.534 "name": "BaseBdev2", 00:09:56.534 "uuid": "7e2a2a03-0594-41d2-9158-cc695e2b29e7", 00:09:56.534 "is_configured": true, 00:09:56.534 "data_offset": 2048, 00:09:56.534 "data_size": 63488 00:09:56.534 }, 00:09:56.534 { 00:09:56.534 "name": "BaseBdev3", 00:09:56.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.534 "is_configured": false, 00:09:56.534 "data_offset": 0, 00:09:56.534 "data_size": 0 00:09:56.534 } 00:09:56.534 ] 00:09:56.534 }' 00:09:56.534 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.534 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.102 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.102 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.102 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.102 [2024-11-08 16:51:26.445362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.102 [2024-11-08 16:51:26.445566] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:57.102 [2024-11-08 16:51:26.445595] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:57.102 BaseBdev3 00:09:57.102 [2024-11-08 16:51:26.445906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:57.102 [2024-11-08 16:51:26.446052] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:57.103 [2024-11-08 16:51:26.446067] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:57.103 [2024-11-08 16:51:26.446186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.103 [ 00:09:57.103 { 00:09:57.103 "name": "BaseBdev3", 00:09:57.103 "aliases": [ 00:09:57.103 "a65e725f-3731-4d76-a5fe-5372879cc966" 00:09:57.103 ], 00:09:57.103 "product_name": "Malloc disk", 00:09:57.103 "block_size": 512, 00:09:57.103 "num_blocks": 65536, 00:09:57.103 "uuid": "a65e725f-3731-4d76-a5fe-5372879cc966", 00:09:57.103 "assigned_rate_limits": { 00:09:57.103 "rw_ios_per_sec": 0, 00:09:57.103 "rw_mbytes_per_sec": 0, 00:09:57.103 "r_mbytes_per_sec": 0, 00:09:57.103 "w_mbytes_per_sec": 0 00:09:57.103 }, 00:09:57.103 "claimed": true, 00:09:57.103 "claim_type": "exclusive_write", 00:09:57.103 "zoned": false, 00:09:57.103 "supported_io_types": { 00:09:57.103 "read": true, 00:09:57.103 "write": true, 00:09:57.103 "unmap": true, 00:09:57.103 "flush": true, 00:09:57.103 "reset": true, 00:09:57.103 "nvme_admin": false, 00:09:57.103 "nvme_io": false, 00:09:57.103 "nvme_io_md": false, 00:09:57.103 "write_zeroes": true, 00:09:57.103 "zcopy": true, 00:09:57.103 "get_zone_info": false, 00:09:57.103 "zone_management": false, 00:09:57.103 "zone_append": false, 00:09:57.103 "compare": false, 00:09:57.103 "compare_and_write": false, 00:09:57.103 "abort": true, 00:09:57.103 "seek_hole": false, 00:09:57.103 "seek_data": false, 00:09:57.103 "copy": true, 00:09:57.103 "nvme_iov_md": false 00:09:57.103 }, 00:09:57.103 "memory_domains": [ 00:09:57.103 { 00:09:57.103 "dma_device_id": "system", 00:09:57.103 "dma_device_type": 1 00:09:57.103 }, 00:09:57.103 { 00:09:57.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.103 "dma_device_type": 2 00:09:57.103 } 00:09:57.103 ], 00:09:57.103 "driver_specific": 
{} 00:09:57.103 } 00:09:57.103 ] 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.103 
16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.103 "name": "Existed_Raid", 00:09:57.103 "uuid": "45ac7aa6-a534-41e8-9c9d-f5002f172174", 00:09:57.103 "strip_size_kb": 64, 00:09:57.103 "state": "online", 00:09:57.103 "raid_level": "raid0", 00:09:57.103 "superblock": true, 00:09:57.103 "num_base_bdevs": 3, 00:09:57.103 "num_base_bdevs_discovered": 3, 00:09:57.103 "num_base_bdevs_operational": 3, 00:09:57.103 "base_bdevs_list": [ 00:09:57.103 { 00:09:57.103 "name": "BaseBdev1", 00:09:57.103 "uuid": "64f34bd2-c286-436e-bfd8-c6f27ef3f222", 00:09:57.103 "is_configured": true, 00:09:57.103 "data_offset": 2048, 00:09:57.103 "data_size": 63488 00:09:57.103 }, 00:09:57.103 { 00:09:57.103 "name": "BaseBdev2", 00:09:57.103 "uuid": "7e2a2a03-0594-41d2-9158-cc695e2b29e7", 00:09:57.103 "is_configured": true, 00:09:57.103 "data_offset": 2048, 00:09:57.103 "data_size": 63488 00:09:57.103 }, 00:09:57.103 { 00:09:57.103 "name": "BaseBdev3", 00:09:57.103 "uuid": "a65e725f-3731-4d76-a5fe-5372879cc966", 00:09:57.103 "is_configured": true, 00:09:57.103 "data_offset": 2048, 00:09:57.103 "data_size": 63488 00:09:57.103 } 00:09:57.103 ] 00:09:57.103 }' 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.103 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.671 [2024-11-08 16:51:26.900922] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.671 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.671 "name": "Existed_Raid", 00:09:57.671 "aliases": [ 00:09:57.671 "45ac7aa6-a534-41e8-9c9d-f5002f172174" 00:09:57.671 ], 00:09:57.671 "product_name": "Raid Volume", 00:09:57.671 "block_size": 512, 00:09:57.671 "num_blocks": 190464, 00:09:57.671 "uuid": "45ac7aa6-a534-41e8-9c9d-f5002f172174", 00:09:57.671 "assigned_rate_limits": { 00:09:57.671 "rw_ios_per_sec": 0, 00:09:57.671 "rw_mbytes_per_sec": 0, 00:09:57.671 "r_mbytes_per_sec": 0, 00:09:57.671 "w_mbytes_per_sec": 0 00:09:57.671 }, 00:09:57.671 "claimed": false, 00:09:57.671 "zoned": false, 00:09:57.671 "supported_io_types": { 00:09:57.671 "read": true, 00:09:57.671 "write": true, 00:09:57.671 "unmap": true, 00:09:57.671 "flush": true, 00:09:57.671 "reset": true, 00:09:57.671 "nvme_admin": false, 00:09:57.671 "nvme_io": false, 00:09:57.671 "nvme_io_md": false, 00:09:57.671 
"write_zeroes": true, 00:09:57.671 "zcopy": false, 00:09:57.671 "get_zone_info": false, 00:09:57.671 "zone_management": false, 00:09:57.671 "zone_append": false, 00:09:57.671 "compare": false, 00:09:57.671 "compare_and_write": false, 00:09:57.671 "abort": false, 00:09:57.671 "seek_hole": false, 00:09:57.671 "seek_data": false, 00:09:57.671 "copy": false, 00:09:57.671 "nvme_iov_md": false 00:09:57.671 }, 00:09:57.671 "memory_domains": [ 00:09:57.671 { 00:09:57.671 "dma_device_id": "system", 00:09:57.671 "dma_device_type": 1 00:09:57.671 }, 00:09:57.671 { 00:09:57.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.671 "dma_device_type": 2 00:09:57.671 }, 00:09:57.671 { 00:09:57.671 "dma_device_id": "system", 00:09:57.671 "dma_device_type": 1 00:09:57.671 }, 00:09:57.671 { 00:09:57.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.671 "dma_device_type": 2 00:09:57.671 }, 00:09:57.671 { 00:09:57.671 "dma_device_id": "system", 00:09:57.671 "dma_device_type": 1 00:09:57.671 }, 00:09:57.671 { 00:09:57.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.671 "dma_device_type": 2 00:09:57.671 } 00:09:57.671 ], 00:09:57.671 "driver_specific": { 00:09:57.671 "raid": { 00:09:57.672 "uuid": "45ac7aa6-a534-41e8-9c9d-f5002f172174", 00:09:57.672 "strip_size_kb": 64, 00:09:57.672 "state": "online", 00:09:57.672 "raid_level": "raid0", 00:09:57.672 "superblock": true, 00:09:57.672 "num_base_bdevs": 3, 00:09:57.672 "num_base_bdevs_discovered": 3, 00:09:57.672 "num_base_bdevs_operational": 3, 00:09:57.672 "base_bdevs_list": [ 00:09:57.672 { 00:09:57.672 "name": "BaseBdev1", 00:09:57.672 "uuid": "64f34bd2-c286-436e-bfd8-c6f27ef3f222", 00:09:57.672 "is_configured": true, 00:09:57.672 "data_offset": 2048, 00:09:57.672 "data_size": 63488 00:09:57.672 }, 00:09:57.672 { 00:09:57.672 "name": "BaseBdev2", 00:09:57.672 "uuid": "7e2a2a03-0594-41d2-9158-cc695e2b29e7", 00:09:57.672 "is_configured": true, 00:09:57.672 "data_offset": 2048, 00:09:57.672 "data_size": 63488 00:09:57.672 }, 
00:09:57.672 { 00:09:57.672 "name": "BaseBdev3", 00:09:57.672 "uuid": "a65e725f-3731-4d76-a5fe-5372879cc966", 00:09:57.672 "is_configured": true, 00:09:57.672 "data_offset": 2048, 00:09:57.672 "data_size": 63488 00:09:57.672 } 00:09:57.672 ] 00:09:57.672 } 00:09:57.672 } 00:09:57.672 }' 00:09:57.672 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.672 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:57.672 BaseBdev2 00:09:57.672 BaseBdev3' 00:09:57.672 16:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.672 
16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.672 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.672 [2024-11-08 16:51:27.192199] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.672 [2024-11-08 16:51:27.192261] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.672 [2024-11-08 16:51:27.192344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.931 "name": "Existed_Raid", 00:09:57.931 "uuid": "45ac7aa6-a534-41e8-9c9d-f5002f172174", 00:09:57.931 "strip_size_kb": 64, 00:09:57.931 "state": "offline", 00:09:57.931 "raid_level": "raid0", 00:09:57.931 "superblock": true, 00:09:57.931 "num_base_bdevs": 3, 00:09:57.931 "num_base_bdevs_discovered": 2, 00:09:57.931 "num_base_bdevs_operational": 2, 00:09:57.931 "base_bdevs_list": [ 00:09:57.931 { 00:09:57.931 "name": null, 00:09:57.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.931 "is_configured": false, 00:09:57.931 "data_offset": 0, 00:09:57.931 "data_size": 63488 00:09:57.931 }, 00:09:57.931 { 00:09:57.931 "name": "BaseBdev2", 00:09:57.931 "uuid": "7e2a2a03-0594-41d2-9158-cc695e2b29e7", 00:09:57.931 "is_configured": true, 00:09:57.931 "data_offset": 2048, 00:09:57.931 "data_size": 63488 00:09:57.931 }, 00:09:57.931 { 00:09:57.931 "name": "BaseBdev3", 00:09:57.931 "uuid": "a65e725f-3731-4d76-a5fe-5372879cc966", 
00:09:57.931 "is_configured": true, 00:09:57.931 "data_offset": 2048, 00:09:57.931 "data_size": 63488 00:09:57.931 } 00:09:57.931 ] 00:09:57.931 }' 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.931 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.190 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:58.190 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.190 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.190 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.190 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.190 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.190 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.478 [2024-11-08 16:51:27.734918] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.478 [2024-11-08 16:51:27.806237] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.478 [2024-11-08 16:51:27.806288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.478 BaseBdev2 00:09:58.478 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:58.479 16:51:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.479 [ 00:09:58.479 { 00:09:58.479 "name": "BaseBdev2", 00:09:58.479 "aliases": [ 00:09:58.479 "29e86df5-8d3c-4a0e-9a7d-32b66ba4325c" 00:09:58.479 ], 00:09:58.479 "product_name": "Malloc disk", 00:09:58.479 "block_size": 512, 00:09:58.479 "num_blocks": 65536, 00:09:58.479 "uuid": "29e86df5-8d3c-4a0e-9a7d-32b66ba4325c", 00:09:58.479 "assigned_rate_limits": { 00:09:58.479 "rw_ios_per_sec": 0, 00:09:58.479 "rw_mbytes_per_sec": 0, 00:09:58.479 "r_mbytes_per_sec": 0, 00:09:58.479 "w_mbytes_per_sec": 0 00:09:58.479 }, 00:09:58.479 "claimed": false, 00:09:58.479 "zoned": false, 00:09:58.479 "supported_io_types": { 00:09:58.479 "read": true, 00:09:58.479 "write": true, 00:09:58.479 "unmap": true, 00:09:58.479 "flush": true, 00:09:58.479 "reset": true, 00:09:58.479 "nvme_admin": false, 00:09:58.479 "nvme_io": false, 00:09:58.479 "nvme_io_md": false, 00:09:58.479 "write_zeroes": true, 00:09:58.479 "zcopy": true, 00:09:58.479 "get_zone_info": false, 00:09:58.479 
"zone_management": false, 00:09:58.479 "zone_append": false, 00:09:58.479 "compare": false, 00:09:58.479 "compare_and_write": false, 00:09:58.479 "abort": true, 00:09:58.479 "seek_hole": false, 00:09:58.479 "seek_data": false, 00:09:58.479 "copy": true, 00:09:58.479 "nvme_iov_md": false 00:09:58.479 }, 00:09:58.479 "memory_domains": [ 00:09:58.479 { 00:09:58.479 "dma_device_id": "system", 00:09:58.479 "dma_device_type": 1 00:09:58.479 }, 00:09:58.479 { 00:09:58.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.479 "dma_device_type": 2 00:09:58.479 } 00:09:58.479 ], 00:09:58.479 "driver_specific": {} 00:09:58.479 } 00:09:58.479 ] 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.479 BaseBdev3 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.479 [ 00:09:58.479 { 00:09:58.479 "name": "BaseBdev3", 00:09:58.479 "aliases": [ 00:09:58.479 "83c22c46-7aad-42f5-b6c4-12f5a591a217" 00:09:58.479 ], 00:09:58.479 "product_name": "Malloc disk", 00:09:58.479 "block_size": 512, 00:09:58.479 "num_blocks": 65536, 00:09:58.479 "uuid": "83c22c46-7aad-42f5-b6c4-12f5a591a217", 00:09:58.479 "assigned_rate_limits": { 00:09:58.479 "rw_ios_per_sec": 0, 00:09:58.479 "rw_mbytes_per_sec": 0, 00:09:58.479 "r_mbytes_per_sec": 0, 00:09:58.479 "w_mbytes_per_sec": 0 00:09:58.479 }, 00:09:58.479 "claimed": false, 00:09:58.479 "zoned": false, 00:09:58.479 "supported_io_types": { 00:09:58.479 "read": true, 00:09:58.479 "write": true, 00:09:58.479 "unmap": true, 00:09:58.479 "flush": true, 00:09:58.479 "reset": true, 00:09:58.479 "nvme_admin": false, 00:09:58.479 "nvme_io": false, 00:09:58.479 "nvme_io_md": false, 00:09:58.479 "write_zeroes": true, 00:09:58.479 
"zcopy": true, 00:09:58.479 "get_zone_info": false, 00:09:58.479 "zone_management": false, 00:09:58.479 "zone_append": false, 00:09:58.479 "compare": false, 00:09:58.479 "compare_and_write": false, 00:09:58.479 "abort": true, 00:09:58.479 "seek_hole": false, 00:09:58.479 "seek_data": false, 00:09:58.479 "copy": true, 00:09:58.479 "nvme_iov_md": false 00:09:58.479 }, 00:09:58.479 "memory_domains": [ 00:09:58.479 { 00:09:58.479 "dma_device_id": "system", 00:09:58.479 "dma_device_type": 1 00:09:58.479 }, 00:09:58.479 { 00:09:58.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.479 "dma_device_type": 2 00:09:58.479 } 00:09:58.479 ], 00:09:58.479 "driver_specific": {} 00:09:58.479 } 00:09:58.479 ] 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.479 [2024-11-08 16:51:27.982559] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.479 [2024-11-08 16:51:27.982653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.479 [2024-11-08 16:51:27.982697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.479 [2024-11-08 16:51:27.984496] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.479 16:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.742 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.742 16:51:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.742 "name": "Existed_Raid", 00:09:58.742 "uuid": "0f6be334-f1ff-4d7f-836f-f8c437c2380b", 00:09:58.742 "strip_size_kb": 64, 00:09:58.742 "state": "configuring", 00:09:58.742 "raid_level": "raid0", 00:09:58.742 "superblock": true, 00:09:58.742 "num_base_bdevs": 3, 00:09:58.742 "num_base_bdevs_discovered": 2, 00:09:58.742 "num_base_bdevs_operational": 3, 00:09:58.742 "base_bdevs_list": [ 00:09:58.742 { 00:09:58.742 "name": "BaseBdev1", 00:09:58.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.742 "is_configured": false, 00:09:58.743 "data_offset": 0, 00:09:58.743 "data_size": 0 00:09:58.743 }, 00:09:58.743 { 00:09:58.743 "name": "BaseBdev2", 00:09:58.743 "uuid": "29e86df5-8d3c-4a0e-9a7d-32b66ba4325c", 00:09:58.743 "is_configured": true, 00:09:58.743 "data_offset": 2048, 00:09:58.743 "data_size": 63488 00:09:58.743 }, 00:09:58.743 { 00:09:58.743 "name": "BaseBdev3", 00:09:58.743 "uuid": "83c22c46-7aad-42f5-b6c4-12f5a591a217", 00:09:58.743 "is_configured": true, 00:09:58.743 "data_offset": 2048, 00:09:58.743 "data_size": 63488 00:09:58.743 } 00:09:58.743 ] 00:09:58.743 }' 00:09:58.743 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.743 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.004 [2024-11-08 16:51:28.409848] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.004 16:51:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.004 "name": "Existed_Raid", 00:09:59.004 "uuid": "0f6be334-f1ff-4d7f-836f-f8c437c2380b", 00:09:59.004 "strip_size_kb": 64, 
00:09:59.004 "state": "configuring", 00:09:59.004 "raid_level": "raid0", 00:09:59.004 "superblock": true, 00:09:59.004 "num_base_bdevs": 3, 00:09:59.004 "num_base_bdevs_discovered": 1, 00:09:59.004 "num_base_bdevs_operational": 3, 00:09:59.004 "base_bdevs_list": [ 00:09:59.004 { 00:09:59.004 "name": "BaseBdev1", 00:09:59.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.004 "is_configured": false, 00:09:59.004 "data_offset": 0, 00:09:59.004 "data_size": 0 00:09:59.004 }, 00:09:59.004 { 00:09:59.004 "name": null, 00:09:59.004 "uuid": "29e86df5-8d3c-4a0e-9a7d-32b66ba4325c", 00:09:59.004 "is_configured": false, 00:09:59.004 "data_offset": 0, 00:09:59.004 "data_size": 63488 00:09:59.004 }, 00:09:59.004 { 00:09:59.004 "name": "BaseBdev3", 00:09:59.004 "uuid": "83c22c46-7aad-42f5-b6c4-12f5a591a217", 00:09:59.004 "is_configured": true, 00:09:59.004 "data_offset": 2048, 00:09:59.004 "data_size": 63488 00:09:59.004 } 00:09:59.004 ] 00:09:59.004 }' 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.004 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.571 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.571 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.571 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:59.571 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.571 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.571 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:59.571 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:59.571 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.571 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.571 [2024-11-08 16:51:28.947826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.571 BaseBdev1 00:09:59.571 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.572 
[ 00:09:59.572 { 00:09:59.572 "name": "BaseBdev1", 00:09:59.572 "aliases": [ 00:09:59.572 "fdfe7dca-cbab-4e84-8be4-17fb588302e2" 00:09:59.572 ], 00:09:59.572 "product_name": "Malloc disk", 00:09:59.572 "block_size": 512, 00:09:59.572 "num_blocks": 65536, 00:09:59.572 "uuid": "fdfe7dca-cbab-4e84-8be4-17fb588302e2", 00:09:59.572 "assigned_rate_limits": { 00:09:59.572 "rw_ios_per_sec": 0, 00:09:59.572 "rw_mbytes_per_sec": 0, 00:09:59.572 "r_mbytes_per_sec": 0, 00:09:59.572 "w_mbytes_per_sec": 0 00:09:59.572 }, 00:09:59.572 "claimed": true, 00:09:59.572 "claim_type": "exclusive_write", 00:09:59.572 "zoned": false, 00:09:59.572 "supported_io_types": { 00:09:59.572 "read": true, 00:09:59.572 "write": true, 00:09:59.572 "unmap": true, 00:09:59.572 "flush": true, 00:09:59.572 "reset": true, 00:09:59.572 "nvme_admin": false, 00:09:59.572 "nvme_io": false, 00:09:59.572 "nvme_io_md": false, 00:09:59.572 "write_zeroes": true, 00:09:59.572 "zcopy": true, 00:09:59.572 "get_zone_info": false, 00:09:59.572 "zone_management": false, 00:09:59.572 "zone_append": false, 00:09:59.572 "compare": false, 00:09:59.572 "compare_and_write": false, 00:09:59.572 "abort": true, 00:09:59.572 "seek_hole": false, 00:09:59.572 "seek_data": false, 00:09:59.572 "copy": true, 00:09:59.572 "nvme_iov_md": false 00:09:59.572 }, 00:09:59.572 "memory_domains": [ 00:09:59.572 { 00:09:59.572 "dma_device_id": "system", 00:09:59.572 "dma_device_type": 1 00:09:59.572 }, 00:09:59.572 { 00:09:59.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.572 "dma_device_type": 2 00:09:59.572 } 00:09:59.572 ], 00:09:59.572 "driver_specific": {} 00:09:59.572 } 00:09:59.572 ] 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.572 16:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.572 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.572 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.572 "name": "Existed_Raid", 00:09:59.572 "uuid": "0f6be334-f1ff-4d7f-836f-f8c437c2380b", 00:09:59.572 "strip_size_kb": 64, 00:09:59.572 "state": "configuring", 00:09:59.572 "raid_level": "raid0", 00:09:59.572 "superblock": true, 
00:09:59.572 "num_base_bdevs": 3, 00:09:59.572 "num_base_bdevs_discovered": 2, 00:09:59.572 "num_base_bdevs_operational": 3, 00:09:59.572 "base_bdevs_list": [ 00:09:59.572 { 00:09:59.572 "name": "BaseBdev1", 00:09:59.572 "uuid": "fdfe7dca-cbab-4e84-8be4-17fb588302e2", 00:09:59.572 "is_configured": true, 00:09:59.572 "data_offset": 2048, 00:09:59.572 "data_size": 63488 00:09:59.572 }, 00:09:59.572 { 00:09:59.572 "name": null, 00:09:59.572 "uuid": "29e86df5-8d3c-4a0e-9a7d-32b66ba4325c", 00:09:59.572 "is_configured": false, 00:09:59.572 "data_offset": 0, 00:09:59.572 "data_size": 63488 00:09:59.572 }, 00:09:59.572 { 00:09:59.572 "name": "BaseBdev3", 00:09:59.572 "uuid": "83c22c46-7aad-42f5-b6c4-12f5a591a217", 00:09:59.572 "is_configured": true, 00:09:59.572 "data_offset": 2048, 00:09:59.572 "data_size": 63488 00:09:59.572 } 00:09:59.572 ] 00:09:59.572 }' 00:09:59.572 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.572 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.140 [2024-11-08 16:51:29.447073] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.140 "name": "Existed_Raid", 00:10:00.140 "uuid": "0f6be334-f1ff-4d7f-836f-f8c437c2380b", 00:10:00.140 "strip_size_kb": 64, 00:10:00.140 "state": "configuring", 00:10:00.140 "raid_level": "raid0", 00:10:00.140 "superblock": true, 00:10:00.140 "num_base_bdevs": 3, 00:10:00.140 "num_base_bdevs_discovered": 1, 00:10:00.140 "num_base_bdevs_operational": 3, 00:10:00.140 "base_bdevs_list": [ 00:10:00.140 { 00:10:00.140 "name": "BaseBdev1", 00:10:00.140 "uuid": "fdfe7dca-cbab-4e84-8be4-17fb588302e2", 00:10:00.140 "is_configured": true, 00:10:00.140 "data_offset": 2048, 00:10:00.140 "data_size": 63488 00:10:00.140 }, 00:10:00.140 { 00:10:00.140 "name": null, 00:10:00.140 "uuid": "29e86df5-8d3c-4a0e-9a7d-32b66ba4325c", 00:10:00.140 "is_configured": false, 00:10:00.140 "data_offset": 0, 00:10:00.140 "data_size": 63488 00:10:00.140 }, 00:10:00.140 { 00:10:00.140 "name": null, 00:10:00.140 "uuid": "83c22c46-7aad-42f5-b6c4-12f5a591a217", 00:10:00.140 "is_configured": false, 00:10:00.140 "data_offset": 0, 00:10:00.140 "data_size": 63488 00:10:00.140 } 00:10:00.140 ] 00:10:00.140 }' 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.140 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.397 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:00.397 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.397 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.397 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:00.397 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.397 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:00.397 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:00.397 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.655 [2024-11-08 16:51:29.930277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.655 "name": "Existed_Raid", 00:10:00.655 "uuid": "0f6be334-f1ff-4d7f-836f-f8c437c2380b", 00:10:00.655 "strip_size_kb": 64, 00:10:00.655 "state": "configuring", 00:10:00.655 "raid_level": "raid0", 00:10:00.655 "superblock": true, 00:10:00.655 "num_base_bdevs": 3, 00:10:00.655 "num_base_bdevs_discovered": 2, 00:10:00.655 "num_base_bdevs_operational": 3, 00:10:00.655 "base_bdevs_list": [ 00:10:00.655 { 00:10:00.655 "name": "BaseBdev1", 00:10:00.655 "uuid": "fdfe7dca-cbab-4e84-8be4-17fb588302e2", 00:10:00.655 "is_configured": true, 00:10:00.655 "data_offset": 2048, 00:10:00.655 "data_size": 63488 00:10:00.655 }, 00:10:00.655 { 00:10:00.655 "name": null, 00:10:00.655 "uuid": "29e86df5-8d3c-4a0e-9a7d-32b66ba4325c", 00:10:00.655 "is_configured": false, 00:10:00.655 "data_offset": 0, 00:10:00.655 "data_size": 63488 00:10:00.655 }, 00:10:00.655 { 00:10:00.655 "name": "BaseBdev3", 00:10:00.655 "uuid": "83c22c46-7aad-42f5-b6c4-12f5a591a217", 00:10:00.655 "is_configured": true, 00:10:00.655 "data_offset": 2048, 00:10:00.655 "data_size": 63488 00:10:00.655 } 00:10:00.655 ] 00:10:00.655 }' 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.655 16:51:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.913 [2024-11-08 16:51:30.417434] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.913 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.172 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.172 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.172 "name": "Existed_Raid", 00:10:01.172 "uuid": "0f6be334-f1ff-4d7f-836f-f8c437c2380b", 00:10:01.172 "strip_size_kb": 64, 00:10:01.172 "state": "configuring", 00:10:01.172 "raid_level": "raid0", 00:10:01.172 "superblock": true, 00:10:01.172 "num_base_bdevs": 3, 00:10:01.172 "num_base_bdevs_discovered": 1, 00:10:01.172 "num_base_bdevs_operational": 3, 00:10:01.172 "base_bdevs_list": [ 00:10:01.172 { 00:10:01.172 "name": null, 00:10:01.172 "uuid": "fdfe7dca-cbab-4e84-8be4-17fb588302e2", 00:10:01.172 "is_configured": false, 00:10:01.172 "data_offset": 0, 00:10:01.172 "data_size": 63488 00:10:01.172 }, 00:10:01.172 { 00:10:01.172 "name": null, 00:10:01.173 "uuid": "29e86df5-8d3c-4a0e-9a7d-32b66ba4325c", 00:10:01.173 "is_configured": false, 00:10:01.173 "data_offset": 0, 00:10:01.173 
"data_size": 63488 00:10:01.173 }, 00:10:01.173 { 00:10:01.173 "name": "BaseBdev3", 00:10:01.173 "uuid": "83c22c46-7aad-42f5-b6c4-12f5a591a217", 00:10:01.173 "is_configured": true, 00:10:01.173 "data_offset": 2048, 00:10:01.173 "data_size": 63488 00:10:01.173 } 00:10:01.173 ] 00:10:01.173 }' 00:10:01.173 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.173 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.431 [2024-11-08 16:51:30.919212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:01.431 16:51:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.431 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.690 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.690 "name": "Existed_Raid", 00:10:01.690 "uuid": "0f6be334-f1ff-4d7f-836f-f8c437c2380b", 00:10:01.690 "strip_size_kb": 64, 00:10:01.690 "state": "configuring", 00:10:01.690 "raid_level": "raid0", 00:10:01.690 "superblock": true, 00:10:01.690 "num_base_bdevs": 3, 00:10:01.690 
"num_base_bdevs_discovered": 2, 00:10:01.690 "num_base_bdevs_operational": 3, 00:10:01.690 "base_bdevs_list": [ 00:10:01.690 { 00:10:01.690 "name": null, 00:10:01.690 "uuid": "fdfe7dca-cbab-4e84-8be4-17fb588302e2", 00:10:01.690 "is_configured": false, 00:10:01.690 "data_offset": 0, 00:10:01.690 "data_size": 63488 00:10:01.690 }, 00:10:01.690 { 00:10:01.690 "name": "BaseBdev2", 00:10:01.690 "uuid": "29e86df5-8d3c-4a0e-9a7d-32b66ba4325c", 00:10:01.690 "is_configured": true, 00:10:01.690 "data_offset": 2048, 00:10:01.690 "data_size": 63488 00:10:01.690 }, 00:10:01.690 { 00:10:01.690 "name": "BaseBdev3", 00:10:01.690 "uuid": "83c22c46-7aad-42f5-b6c4-12f5a591a217", 00:10:01.690 "is_configured": true, 00:10:01.690 "data_offset": 2048, 00:10:01.690 "data_size": 63488 00:10:01.690 } 00:10:01.690 ] 00:10:01.690 }' 00:10:01.690 16:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.690 16:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.950 16:51:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fdfe7dca-cbab-4e84-8be4-17fb588302e2 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 NewBaseBdev 00:10:01.950 [2024-11-08 16:51:31.369355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:01.950 [2024-11-08 16:51:31.369519] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:01.950 [2024-11-08 16:51:31.369535] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:01.950 [2024-11-08 16:51:31.369823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:01.950 [2024-11-08 16:51:31.369941] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:01.950 [2024-11-08 16:51:31.369950] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:01.950 [2024-11-08 16:51:31.370049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 [ 00:10:01.950 { 00:10:01.950 "name": "NewBaseBdev", 00:10:01.950 "aliases": [ 00:10:01.950 "fdfe7dca-cbab-4e84-8be4-17fb588302e2" 00:10:01.950 ], 00:10:01.950 "product_name": "Malloc disk", 00:10:01.950 "block_size": 512, 00:10:01.950 "num_blocks": 65536, 00:10:01.950 "uuid": "fdfe7dca-cbab-4e84-8be4-17fb588302e2", 00:10:01.950 "assigned_rate_limits": { 00:10:01.950 "rw_ios_per_sec": 0, 00:10:01.950 "rw_mbytes_per_sec": 0, 00:10:01.950 "r_mbytes_per_sec": 0, 00:10:01.950 "w_mbytes_per_sec": 0 00:10:01.950 }, 00:10:01.950 "claimed": true, 00:10:01.950 "claim_type": "exclusive_write", 00:10:01.950 "zoned": false, 00:10:01.950 "supported_io_types": { 00:10:01.950 "read": true, 00:10:01.950 "write": true, 
00:10:01.950 "unmap": true, 00:10:01.950 "flush": true, 00:10:01.950 "reset": true, 00:10:01.950 "nvme_admin": false, 00:10:01.950 "nvme_io": false, 00:10:01.950 "nvme_io_md": false, 00:10:01.950 "write_zeroes": true, 00:10:01.950 "zcopy": true, 00:10:01.950 "get_zone_info": false, 00:10:01.950 "zone_management": false, 00:10:01.950 "zone_append": false, 00:10:01.950 "compare": false, 00:10:01.950 "compare_and_write": false, 00:10:01.950 "abort": true, 00:10:01.950 "seek_hole": false, 00:10:01.950 "seek_data": false, 00:10:01.950 "copy": true, 00:10:01.950 "nvme_iov_md": false 00:10:01.950 }, 00:10:01.950 "memory_domains": [ 00:10:01.950 { 00:10:01.950 "dma_device_id": "system", 00:10:01.950 "dma_device_type": 1 00:10:01.950 }, 00:10:01.950 { 00:10:01.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.950 "dma_device_type": 2 00:10:01.950 } 00:10:01.950 ], 00:10:01.950 "driver_specific": {} 00:10:01.950 } 00:10:01.950 ] 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.950 "name": "Existed_Raid", 00:10:01.950 "uuid": "0f6be334-f1ff-4d7f-836f-f8c437c2380b", 00:10:01.950 "strip_size_kb": 64, 00:10:01.950 "state": "online", 00:10:01.950 "raid_level": "raid0", 00:10:01.950 "superblock": true, 00:10:01.950 "num_base_bdevs": 3, 00:10:01.950 "num_base_bdevs_discovered": 3, 00:10:01.950 "num_base_bdevs_operational": 3, 00:10:01.950 "base_bdevs_list": [ 00:10:01.950 { 00:10:01.950 "name": "NewBaseBdev", 00:10:01.950 "uuid": "fdfe7dca-cbab-4e84-8be4-17fb588302e2", 00:10:01.950 "is_configured": true, 00:10:01.950 "data_offset": 2048, 00:10:01.950 "data_size": 63488 00:10:01.950 }, 00:10:01.950 { 00:10:01.950 "name": "BaseBdev2", 00:10:01.950 "uuid": "29e86df5-8d3c-4a0e-9a7d-32b66ba4325c", 00:10:01.950 "is_configured": true, 00:10:01.950 "data_offset": 2048, 00:10:01.950 "data_size": 63488 00:10:01.950 }, 00:10:01.950 { 00:10:01.950 "name": "BaseBdev3", 00:10:01.950 "uuid": 
"83c22c46-7aad-42f5-b6c4-12f5a591a217", 00:10:01.950 "is_configured": true, 00:10:01.950 "data_offset": 2048, 00:10:01.950 "data_size": 63488 00:10:01.950 } 00:10:01.950 ] 00:10:01.950 }' 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.950 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.518 [2024-11-08 16:51:31.832930] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.518 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:02.518 "name": "Existed_Raid", 00:10:02.518 "aliases": [ 00:10:02.518 "0f6be334-f1ff-4d7f-836f-f8c437c2380b" 
00:10:02.518 ], 00:10:02.518 "product_name": "Raid Volume", 00:10:02.518 "block_size": 512, 00:10:02.518 "num_blocks": 190464, 00:10:02.518 "uuid": "0f6be334-f1ff-4d7f-836f-f8c437c2380b", 00:10:02.518 "assigned_rate_limits": { 00:10:02.518 "rw_ios_per_sec": 0, 00:10:02.518 "rw_mbytes_per_sec": 0, 00:10:02.518 "r_mbytes_per_sec": 0, 00:10:02.518 "w_mbytes_per_sec": 0 00:10:02.519 }, 00:10:02.519 "claimed": false, 00:10:02.519 "zoned": false, 00:10:02.519 "supported_io_types": { 00:10:02.519 "read": true, 00:10:02.519 "write": true, 00:10:02.519 "unmap": true, 00:10:02.519 "flush": true, 00:10:02.519 "reset": true, 00:10:02.519 "nvme_admin": false, 00:10:02.519 "nvme_io": false, 00:10:02.519 "nvme_io_md": false, 00:10:02.519 "write_zeroes": true, 00:10:02.519 "zcopy": false, 00:10:02.519 "get_zone_info": false, 00:10:02.519 "zone_management": false, 00:10:02.519 "zone_append": false, 00:10:02.519 "compare": false, 00:10:02.519 "compare_and_write": false, 00:10:02.519 "abort": false, 00:10:02.519 "seek_hole": false, 00:10:02.519 "seek_data": false, 00:10:02.519 "copy": false, 00:10:02.519 "nvme_iov_md": false 00:10:02.519 }, 00:10:02.519 "memory_domains": [ 00:10:02.519 { 00:10:02.519 "dma_device_id": "system", 00:10:02.519 "dma_device_type": 1 00:10:02.519 }, 00:10:02.519 { 00:10:02.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.519 "dma_device_type": 2 00:10:02.519 }, 00:10:02.519 { 00:10:02.519 "dma_device_id": "system", 00:10:02.519 "dma_device_type": 1 00:10:02.519 }, 00:10:02.519 { 00:10:02.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.519 "dma_device_type": 2 00:10:02.519 }, 00:10:02.519 { 00:10:02.519 "dma_device_id": "system", 00:10:02.519 "dma_device_type": 1 00:10:02.519 }, 00:10:02.519 { 00:10:02.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.519 "dma_device_type": 2 00:10:02.519 } 00:10:02.519 ], 00:10:02.519 "driver_specific": { 00:10:02.519 "raid": { 00:10:02.519 "uuid": "0f6be334-f1ff-4d7f-836f-f8c437c2380b", 00:10:02.519 
"strip_size_kb": 64, 00:10:02.519 "state": "online", 00:10:02.519 "raid_level": "raid0", 00:10:02.519 "superblock": true, 00:10:02.519 "num_base_bdevs": 3, 00:10:02.519 "num_base_bdevs_discovered": 3, 00:10:02.519 "num_base_bdevs_operational": 3, 00:10:02.519 "base_bdevs_list": [ 00:10:02.519 { 00:10:02.519 "name": "NewBaseBdev", 00:10:02.519 "uuid": "fdfe7dca-cbab-4e84-8be4-17fb588302e2", 00:10:02.519 "is_configured": true, 00:10:02.519 "data_offset": 2048, 00:10:02.519 "data_size": 63488 00:10:02.519 }, 00:10:02.519 { 00:10:02.519 "name": "BaseBdev2", 00:10:02.519 "uuid": "29e86df5-8d3c-4a0e-9a7d-32b66ba4325c", 00:10:02.519 "is_configured": true, 00:10:02.519 "data_offset": 2048, 00:10:02.519 "data_size": 63488 00:10:02.519 }, 00:10:02.519 { 00:10:02.519 "name": "BaseBdev3", 00:10:02.519 "uuid": "83c22c46-7aad-42f5-b6c4-12f5a591a217", 00:10:02.519 "is_configured": true, 00:10:02.519 "data_offset": 2048, 00:10:02.519 "data_size": 63488 00:10:02.519 } 00:10:02.519 ] 00:10:02.519 } 00:10:02.519 } 00:10:02.519 }' 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:02.519 BaseBdev2 00:10:02.519 BaseBdev3' 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.519 16:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.519 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.519 16:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.519 16:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.519 16:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.519 16:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.519 16:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:02.519 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.519 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.519 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.779 [2024-11-08 16:51:32.068204] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.779 [2024-11-08 16:51:32.068238] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.779 [2024-11-08 16:51:32.068312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.779 [2024-11-08 16:51:32.068378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.779 [2024-11-08 16:51:32.068391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75640 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75640 ']' 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 75640 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75640 00:10:02.779 killing process with pid 75640 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75640' 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75640 00:10:02.779 [2024-11-08 16:51:32.116490] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.779 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75640 00:10:02.779 [2024-11-08 16:51:32.147090] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.037 16:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:03.037 00:10:03.038 real 0m8.740s 00:10:03.038 user 0m14.932s 00:10:03.038 sys 0m1.724s 00:10:03.038 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.038 16:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.038 ************************************ 00:10:03.038 END TEST raid_state_function_test_sb 00:10:03.038 ************************************ 00:10:03.038 16:51:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:03.038 16:51:32 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:03.038 16:51:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.038 16:51:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.038 ************************************ 00:10:03.038 START TEST raid_superblock_test 00:10:03.038 ************************************ 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:03.038 16:51:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76238 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76238 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76238 ']' 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.038 16:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.038 [2024-11-08 16:51:32.540263] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:03.038 [2024-11-08 16:51:32.540791] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76238 ] 00:10:03.296 [2024-11-08 16:51:32.701158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.296 [2024-11-08 16:51:32.747739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.296 [2024-11-08 16:51:32.789499] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.296 [2024-11-08 16:51:32.789542] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.863 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.863 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:03.863 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:03.863 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:03.863 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:03.863 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:03.863 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:03.863 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:03.863 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:03.864 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:03.864 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:03.864 
16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.864 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.864 malloc1 00:10:03.864 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.864 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:03.864 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.864 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.123 [2024-11-08 16:51:33.391535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:04.123 [2024-11-08 16:51:33.391623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.123 [2024-11-08 16:51:33.391659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:04.123 [2024-11-08 16:51:33.391676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.123 [2024-11-08 16:51:33.393792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.123 [2024-11-08 16:51:33.393827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:04.123 pt1 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.123 malloc2 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.123 [2024-11-08 16:51:33.430388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.123 [2024-11-08 16:51:33.430461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.123 [2024-11-08 16:51:33.430488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:04.123 [2024-11-08 16:51:33.430502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.123 [2024-11-08 16:51:33.433177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.123 [2024-11-08 16:51:33.433218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.123 
pt2 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:04.123 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.124 malloc3 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.124 [2024-11-08 16:51:33.466845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:04.124 [2024-11-08 16:51:33.466894] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.124 [2024-11-08 16:51:33.466912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:04.124 [2024-11-08 16:51:33.466922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.124 [2024-11-08 16:51:33.468911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.124 [2024-11-08 16:51:33.468946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:04.124 pt3 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.124 [2024-11-08 16:51:33.478863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:04.124 [2024-11-08 16:51:33.480650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.124 [2024-11-08 16:51:33.480719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:04.124 [2024-11-08 16:51:33.480856] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:04.124 [2024-11-08 16:51:33.480867] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:04.124 [2024-11-08 16:51:33.481109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:10:04.124 [2024-11-08 16:51:33.481245] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:04.124 [2024-11-08 16:51:33.481268] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:04.124 [2024-11-08 16:51:33.481392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.124 16:51:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.124 "name": "raid_bdev1", 00:10:04.124 "uuid": "779fa18c-cbc7-426a-bd89-a4d51395b0ad", 00:10:04.124 "strip_size_kb": 64, 00:10:04.124 "state": "online", 00:10:04.124 "raid_level": "raid0", 00:10:04.124 "superblock": true, 00:10:04.124 "num_base_bdevs": 3, 00:10:04.124 "num_base_bdevs_discovered": 3, 00:10:04.124 "num_base_bdevs_operational": 3, 00:10:04.124 "base_bdevs_list": [ 00:10:04.124 { 00:10:04.124 "name": "pt1", 00:10:04.124 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.124 "is_configured": true, 00:10:04.124 "data_offset": 2048, 00:10:04.124 "data_size": 63488 00:10:04.124 }, 00:10:04.124 { 00:10:04.124 "name": "pt2", 00:10:04.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.124 "is_configured": true, 00:10:04.124 "data_offset": 2048, 00:10:04.124 "data_size": 63488 00:10:04.124 }, 00:10:04.124 { 00:10:04.124 "name": "pt3", 00:10:04.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.124 "is_configured": true, 00:10:04.124 "data_offset": 2048, 00:10:04.124 "data_size": 63488 00:10:04.124 } 00:10:04.124 ] 00:10:04.124 }' 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.124 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.384 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:04.384 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:04.384 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.384 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:04.384 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.384 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.384 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:04.384 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.384 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.384 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.384 [2024-11-08 16:51:33.902468] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.644 16:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.644 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.644 "name": "raid_bdev1", 00:10:04.644 "aliases": [ 00:10:04.644 "779fa18c-cbc7-426a-bd89-a4d51395b0ad" 00:10:04.644 ], 00:10:04.644 "product_name": "Raid Volume", 00:10:04.644 "block_size": 512, 00:10:04.644 "num_blocks": 190464, 00:10:04.644 "uuid": "779fa18c-cbc7-426a-bd89-a4d51395b0ad", 00:10:04.644 "assigned_rate_limits": { 00:10:04.644 "rw_ios_per_sec": 0, 00:10:04.644 "rw_mbytes_per_sec": 0, 00:10:04.644 "r_mbytes_per_sec": 0, 00:10:04.644 "w_mbytes_per_sec": 0 00:10:04.644 }, 00:10:04.644 "claimed": false, 00:10:04.644 "zoned": false, 00:10:04.644 "supported_io_types": { 00:10:04.644 "read": true, 00:10:04.644 "write": true, 00:10:04.644 "unmap": true, 00:10:04.644 "flush": true, 00:10:04.644 "reset": true, 00:10:04.644 "nvme_admin": false, 00:10:04.644 "nvme_io": false, 00:10:04.644 "nvme_io_md": false, 00:10:04.644 "write_zeroes": true, 00:10:04.644 "zcopy": false, 00:10:04.644 "get_zone_info": false, 00:10:04.644 "zone_management": false, 00:10:04.644 "zone_append": false, 00:10:04.644 "compare": 
false, 00:10:04.644 "compare_and_write": false, 00:10:04.644 "abort": false, 00:10:04.644 "seek_hole": false, 00:10:04.644 "seek_data": false, 00:10:04.644 "copy": false, 00:10:04.644 "nvme_iov_md": false 00:10:04.644 }, 00:10:04.644 "memory_domains": [ 00:10:04.644 { 00:10:04.644 "dma_device_id": "system", 00:10:04.644 "dma_device_type": 1 00:10:04.644 }, 00:10:04.644 { 00:10:04.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.644 "dma_device_type": 2 00:10:04.644 }, 00:10:04.644 { 00:10:04.644 "dma_device_id": "system", 00:10:04.644 "dma_device_type": 1 00:10:04.644 }, 00:10:04.644 { 00:10:04.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.644 "dma_device_type": 2 00:10:04.644 }, 00:10:04.644 { 00:10:04.644 "dma_device_id": "system", 00:10:04.644 "dma_device_type": 1 00:10:04.644 }, 00:10:04.644 { 00:10:04.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.644 "dma_device_type": 2 00:10:04.644 } 00:10:04.644 ], 00:10:04.644 "driver_specific": { 00:10:04.644 "raid": { 00:10:04.644 "uuid": "779fa18c-cbc7-426a-bd89-a4d51395b0ad", 00:10:04.644 "strip_size_kb": 64, 00:10:04.644 "state": "online", 00:10:04.644 "raid_level": "raid0", 00:10:04.644 "superblock": true, 00:10:04.644 "num_base_bdevs": 3, 00:10:04.644 "num_base_bdevs_discovered": 3, 00:10:04.644 "num_base_bdevs_operational": 3, 00:10:04.644 "base_bdevs_list": [ 00:10:04.644 { 00:10:04.644 "name": "pt1", 00:10:04.644 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.644 "is_configured": true, 00:10:04.644 "data_offset": 2048, 00:10:04.644 "data_size": 63488 00:10:04.644 }, 00:10:04.644 { 00:10:04.644 "name": "pt2", 00:10:04.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.644 "is_configured": true, 00:10:04.644 "data_offset": 2048, 00:10:04.644 "data_size": 63488 00:10:04.644 }, 00:10:04.644 { 00:10:04.644 "name": "pt3", 00:10:04.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.644 "is_configured": true, 00:10:04.644 "data_offset": 2048, 00:10:04.644 "data_size": 
63488 00:10:04.644 } 00:10:04.644 ] 00:10:04.644 } 00:10:04.644 } 00:10:04.644 }' 00:10:04.644 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.644 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:04.644 pt2 00:10:04.644 pt3' 00:10:04.644 16:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.644 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.645 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.905 [2024-11-08 16:51:34.173898] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.905 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:04.905 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=779fa18c-cbc7-426a-bd89-a4d51395b0ad 00:10:04.905 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 779fa18c-cbc7-426a-bd89-a4d51395b0ad ']' 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.906 [2024-11-08 16:51:34.221550] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:04.906 [2024-11-08 16:51:34.221583] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.906 [2024-11-08 16:51:34.221676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.906 [2024-11-08 16:51:34.221740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.906 [2024-11-08 16:51:34.221755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:04.906 16:51:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.906 [2024-11-08 16:51:34.365333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:04.906 [2024-11-08 16:51:34.367273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:04.906 [2024-11-08 16:51:34.367354] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:04.906 [2024-11-08 16:51:34.367403] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:04.906 [2024-11-08 16:51:34.367444] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:04.906 [2024-11-08 16:51:34.367463] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:04.906 [2024-11-08 16:51:34.367477] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:04.906 [2024-11-08 16:51:34.367487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:10:04.906 request: 00:10:04.906 { 00:10:04.906 "name": "raid_bdev1", 00:10:04.906 "raid_level": "raid0", 00:10:04.906 "base_bdevs": [ 00:10:04.906 "malloc1", 00:10:04.906 "malloc2", 00:10:04.906 "malloc3" 00:10:04.906 ], 00:10:04.906 "strip_size_kb": 64, 00:10:04.906 "superblock": false, 00:10:04.906 "method": "bdev_raid_create", 00:10:04.906 "req_id": 1 00:10:04.906 } 00:10:04.906 Got JSON-RPC error response 00:10:04.906 response: 00:10:04.906 { 00:10:04.906 "code": -17, 00:10:04.906 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:04.906 } 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.906 [2024-11-08 16:51:34.421196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:04.906 [2024-11-08 16:51:34.421251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.906 [2024-11-08 16:51:34.421269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:04.906 [2024-11-08 16:51:34.421279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.906 [2024-11-08 16:51:34.423392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.906 [2024-11-08 16:51:34.423431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:04.906 [2024-11-08 16:51:34.423499] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:04.906 [2024-11-08 16:51:34.423550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:04.906 pt1 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.906 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.167 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.167 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.167 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.167 "name": "raid_bdev1", 00:10:05.167 "uuid": "779fa18c-cbc7-426a-bd89-a4d51395b0ad", 00:10:05.167 
"strip_size_kb": 64, 00:10:05.167 "state": "configuring", 00:10:05.167 "raid_level": "raid0", 00:10:05.167 "superblock": true, 00:10:05.167 "num_base_bdevs": 3, 00:10:05.167 "num_base_bdevs_discovered": 1, 00:10:05.167 "num_base_bdevs_operational": 3, 00:10:05.167 "base_bdevs_list": [ 00:10:05.167 { 00:10:05.167 "name": "pt1", 00:10:05.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.167 "is_configured": true, 00:10:05.167 "data_offset": 2048, 00:10:05.167 "data_size": 63488 00:10:05.167 }, 00:10:05.167 { 00:10:05.167 "name": null, 00:10:05.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.167 "is_configured": false, 00:10:05.167 "data_offset": 2048, 00:10:05.167 "data_size": 63488 00:10:05.167 }, 00:10:05.167 { 00:10:05.167 "name": null, 00:10:05.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.167 "is_configured": false, 00:10:05.167 "data_offset": 2048, 00:10:05.167 "data_size": 63488 00:10:05.167 } 00:10:05.167 ] 00:10:05.167 }' 00:10:05.167 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.167 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.431 [2024-11-08 16:51:34.856515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:05.431 [2024-11-08 16:51:34.856591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.431 [2024-11-08 16:51:34.856613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:05.431 [2024-11-08 16:51:34.856628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.431 [2024-11-08 16:51:34.857100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.431 [2024-11-08 16:51:34.857123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:05.431 [2024-11-08 16:51:34.857200] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:05.431 [2024-11-08 16:51:34.857225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:05.431 pt2 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.431 [2024-11-08 16:51:34.868488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.431 16:51:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.431 "name": "raid_bdev1", 00:10:05.431 "uuid": "779fa18c-cbc7-426a-bd89-a4d51395b0ad", 00:10:05.431 "strip_size_kb": 64, 00:10:05.431 "state": "configuring", 00:10:05.431 "raid_level": "raid0", 00:10:05.431 "superblock": true, 00:10:05.431 "num_base_bdevs": 3, 00:10:05.431 "num_base_bdevs_discovered": 1, 00:10:05.431 "num_base_bdevs_operational": 3, 00:10:05.431 "base_bdevs_list": [ 00:10:05.431 { 00:10:05.431 "name": "pt1", 00:10:05.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.431 "is_configured": true, 00:10:05.431 "data_offset": 2048, 00:10:05.431 "data_size": 63488 00:10:05.431 }, 00:10:05.431 { 00:10:05.431 "name": null, 00:10:05.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.431 "is_configured": false, 00:10:05.431 "data_offset": 0, 00:10:05.431 "data_size": 63488 00:10:05.431 }, 00:10:05.431 { 00:10:05.431 "name": null, 00:10:05.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.431 
"is_configured": false, 00:10:05.431 "data_offset": 2048, 00:10:05.431 "data_size": 63488 00:10:05.431 } 00:10:05.431 ] 00:10:05.431 }' 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.431 16:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.015 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:06.015 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:06.015 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:06.015 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.015 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.015 [2024-11-08 16:51:35.315806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:06.015 [2024-11-08 16:51:35.315888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.015 [2024-11-08 16:51:35.315910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:06.015 [2024-11-08 16:51:35.315922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.015 [2024-11-08 16:51:35.316361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.015 [2024-11-08 16:51:35.316379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:06.015 [2024-11-08 16:51:35.316464] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:06.015 [2024-11-08 16:51:35.316488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:06.016 pt2 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.016 [2024-11-08 16:51:35.327771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:06.016 [2024-11-08 16:51:35.327842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.016 [2024-11-08 16:51:35.327864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:06.016 [2024-11-08 16:51:35.327874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.016 [2024-11-08 16:51:35.328315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.016 [2024-11-08 16:51:35.328340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:06.016 [2024-11-08 16:51:35.328424] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:06.016 [2024-11-08 16:51:35.328452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:06.016 [2024-11-08 16:51:35.328567] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:06.016 [2024-11-08 16:51:35.328577] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:06.016 [2024-11-08 16:51:35.328848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:06.016 [2024-11-08 16:51:35.328972] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:06.016 [2024-11-08 16:51:35.328984] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:06.016 [2024-11-08 16:51:35.329100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.016 pt3 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.016 "name": "raid_bdev1", 00:10:06.016 "uuid": "779fa18c-cbc7-426a-bd89-a4d51395b0ad", 00:10:06.016 "strip_size_kb": 64, 00:10:06.016 "state": "online", 00:10:06.016 "raid_level": "raid0", 00:10:06.016 "superblock": true, 00:10:06.016 "num_base_bdevs": 3, 00:10:06.016 "num_base_bdevs_discovered": 3, 00:10:06.016 "num_base_bdevs_operational": 3, 00:10:06.016 "base_bdevs_list": [ 00:10:06.016 { 00:10:06.016 "name": "pt1", 00:10:06.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.016 "is_configured": true, 00:10:06.016 "data_offset": 2048, 00:10:06.016 "data_size": 63488 00:10:06.016 }, 00:10:06.016 { 00:10:06.016 "name": "pt2", 00:10:06.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.016 "is_configured": true, 00:10:06.016 "data_offset": 2048, 00:10:06.016 "data_size": 63488 00:10:06.016 }, 00:10:06.016 { 00:10:06.016 "name": "pt3", 00:10:06.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.016 "is_configured": true, 00:10:06.016 "data_offset": 2048, 00:10:06.016 "data_size": 63488 00:10:06.016 } 00:10:06.016 ] 00:10:06.016 }' 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.016 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.276 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:06.276 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:06.276 16:51:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.276 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.276 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.276 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.276 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:06.276 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.276 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.276 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.276 [2024-11-08 16:51:35.755346] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.276 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.276 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.276 "name": "raid_bdev1", 00:10:06.276 "aliases": [ 00:10:06.276 "779fa18c-cbc7-426a-bd89-a4d51395b0ad" 00:10:06.276 ], 00:10:06.276 "product_name": "Raid Volume", 00:10:06.276 "block_size": 512, 00:10:06.276 "num_blocks": 190464, 00:10:06.276 "uuid": "779fa18c-cbc7-426a-bd89-a4d51395b0ad", 00:10:06.276 "assigned_rate_limits": { 00:10:06.276 "rw_ios_per_sec": 0, 00:10:06.276 "rw_mbytes_per_sec": 0, 00:10:06.276 "r_mbytes_per_sec": 0, 00:10:06.276 "w_mbytes_per_sec": 0 00:10:06.276 }, 00:10:06.276 "claimed": false, 00:10:06.276 "zoned": false, 00:10:06.276 "supported_io_types": { 00:10:06.276 "read": true, 00:10:06.276 "write": true, 00:10:06.276 "unmap": true, 00:10:06.276 "flush": true, 00:10:06.276 "reset": true, 00:10:06.276 "nvme_admin": false, 00:10:06.276 "nvme_io": false, 00:10:06.276 "nvme_io_md": false, 00:10:06.276 
"write_zeroes": true, 00:10:06.276 "zcopy": false, 00:10:06.276 "get_zone_info": false, 00:10:06.276 "zone_management": false, 00:10:06.276 "zone_append": false, 00:10:06.276 "compare": false, 00:10:06.276 "compare_and_write": false, 00:10:06.276 "abort": false, 00:10:06.276 "seek_hole": false, 00:10:06.276 "seek_data": false, 00:10:06.276 "copy": false, 00:10:06.276 "nvme_iov_md": false 00:10:06.276 }, 00:10:06.276 "memory_domains": [ 00:10:06.276 { 00:10:06.276 "dma_device_id": "system", 00:10:06.276 "dma_device_type": 1 00:10:06.276 }, 00:10:06.276 { 00:10:06.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.276 "dma_device_type": 2 00:10:06.276 }, 00:10:06.276 { 00:10:06.276 "dma_device_id": "system", 00:10:06.276 "dma_device_type": 1 00:10:06.276 }, 00:10:06.276 { 00:10:06.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.276 "dma_device_type": 2 00:10:06.276 }, 00:10:06.276 { 00:10:06.276 "dma_device_id": "system", 00:10:06.276 "dma_device_type": 1 00:10:06.276 }, 00:10:06.276 { 00:10:06.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.276 "dma_device_type": 2 00:10:06.276 } 00:10:06.276 ], 00:10:06.276 "driver_specific": { 00:10:06.276 "raid": { 00:10:06.276 "uuid": "779fa18c-cbc7-426a-bd89-a4d51395b0ad", 00:10:06.276 "strip_size_kb": 64, 00:10:06.276 "state": "online", 00:10:06.276 "raid_level": "raid0", 00:10:06.276 "superblock": true, 00:10:06.276 "num_base_bdevs": 3, 00:10:06.276 "num_base_bdevs_discovered": 3, 00:10:06.276 "num_base_bdevs_operational": 3, 00:10:06.276 "base_bdevs_list": [ 00:10:06.276 { 00:10:06.276 "name": "pt1", 00:10:06.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.276 "is_configured": true, 00:10:06.276 "data_offset": 2048, 00:10:06.276 "data_size": 63488 00:10:06.276 }, 00:10:06.276 { 00:10:06.276 "name": "pt2", 00:10:06.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.277 "is_configured": true, 00:10:06.277 "data_offset": 2048, 00:10:06.277 "data_size": 63488 00:10:06.277 }, 00:10:06.277 
{ 00:10:06.277 "name": "pt3", 00:10:06.277 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.277 "is_configured": true, 00:10:06.277 "data_offset": 2048, 00:10:06.277 "data_size": 63488 00:10:06.277 } 00:10:06.277 ] 00:10:06.277 } 00:10:06.277 } 00:10:06.277 }' 00:10:06.277 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:06.536 pt2 00:10:06.536 pt3' 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:06.536 16:51:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.536 16:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.536 
[2024-11-08 16:51:36.018919] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 779fa18c-cbc7-426a-bd89-a4d51395b0ad '!=' 779fa18c-cbc7-426a-bd89-a4d51395b0ad ']' 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76238 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76238 ']' 00:10:06.536 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76238 00:10:06.796 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:06.796 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.796 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76238 00:10:06.796 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.796 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.796 killing process with pid 76238 00:10:06.796 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76238' 00:10:06.796 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76238 00:10:06.796 [2024-11-08 16:51:36.102079] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.796 [2024-11-08 16:51:36.102209] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.796 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76238 00:10:06.796 [2024-11-08 16:51:36.102283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.796 [2024-11-08 16:51:36.102294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:06.796 [2024-11-08 16:51:36.136198] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.055 16:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:07.055 00:10:07.055 real 0m3.926s 00:10:07.055 user 0m6.217s 00:10:07.055 sys 0m0.803s 00:10:07.055 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.055 16:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.055 ************************************ 00:10:07.055 END TEST raid_superblock_test 00:10:07.055 ************************************ 00:10:07.055 16:51:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:07.055 16:51:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:07.055 16:51:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.055 16:51:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.055 ************************************ 00:10:07.055 START TEST raid_read_error_test 00:10:07.055 ************************************ 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:07.055 16:51:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2M7X7jEY3P 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76480 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76480 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76480 ']' 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.055 16:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.055 [2024-11-08 16:51:36.544401] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:07.055 [2024-11-08 16:51:36.544556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76480 ] 00:10:07.315 [2024-11-08 16:51:36.700629] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.315 [2024-11-08 16:51:36.746901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.315 [2024-11-08 16:51:36.790582] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.315 [2024-11-08 16:51:36.790616] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 BaseBdev1_malloc 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 true 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 [2024-11-08 16:51:37.449323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:08.254 [2024-11-08 16:51:37.449371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.254 [2024-11-08 16:51:37.449389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:08.254 [2024-11-08 16:51:37.449398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.254 [2024-11-08 16:51:37.451534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.254 [2024-11-08 16:51:37.451573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:08.254 BaseBdev1 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 BaseBdev2_malloc 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 true 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 [2024-11-08 16:51:37.499003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:08.254 [2024-11-08 16:51:37.499050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.254 [2024-11-08 16:51:37.499084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:08.254 [2024-11-08 16:51:37.499092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.254 [2024-11-08 16:51:37.501073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.254 [2024-11-08 16:51:37.501105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:08.254 BaseBdev2 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 BaseBdev3_malloc 00:10:08.254 16:51:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 true 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 [2024-11-08 16:51:37.539449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:08.254 [2024-11-08 16:51:37.539493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.254 [2024-11-08 16:51:37.539510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:08.254 [2024-11-08 16:51:37.539518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.254 [2024-11-08 16:51:37.541466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.254 [2024-11-08 16:51:37.541499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:08.254 BaseBdev3 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 [2024-11-08 16:51:37.551489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.254 [2024-11-08 16:51:37.553273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.254 [2024-11-08 16:51:37.553368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.254 [2024-11-08 16:51:37.553528] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:08.254 [2024-11-08 16:51:37.553547] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:08.254 [2024-11-08 16:51:37.553804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:08.254 [2024-11-08 16:51:37.553941] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:08.254 [2024-11-08 16:51:37.553958] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:08.254 [2024-11-08 16:51:37.554095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.254 16:51:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.254 "name": "raid_bdev1", 00:10:08.254 "uuid": "8112b7d8-1114-4981-a328-c88305738850", 00:10:08.254 "strip_size_kb": 64, 00:10:08.254 "state": "online", 00:10:08.254 "raid_level": "raid0", 00:10:08.254 "superblock": true, 00:10:08.254 "num_base_bdevs": 3, 00:10:08.254 "num_base_bdevs_discovered": 3, 00:10:08.254 "num_base_bdevs_operational": 3, 00:10:08.254 "base_bdevs_list": [ 00:10:08.254 { 00:10:08.254 "name": "BaseBdev1", 00:10:08.254 "uuid": "bf0d349d-d863-553e-bfa2-1d19a85ce4ce", 00:10:08.254 "is_configured": true, 00:10:08.254 "data_offset": 2048, 00:10:08.254 "data_size": 63488 00:10:08.254 }, 00:10:08.254 { 00:10:08.254 "name": "BaseBdev2", 00:10:08.255 "uuid": "de10f6b5-adfa-5bac-bebb-6b64698a6433", 00:10:08.255 "is_configured": true, 00:10:08.255 "data_offset": 2048, 00:10:08.255 "data_size": 63488 
00:10:08.255 }, 00:10:08.255 { 00:10:08.255 "name": "BaseBdev3", 00:10:08.255 "uuid": "c3a3746f-4320-5926-8c13-ee9d0f6a84b1", 00:10:08.255 "is_configured": true, 00:10:08.255 "data_offset": 2048, 00:10:08.255 "data_size": 63488 00:10:08.255 } 00:10:08.255 ] 00:10:08.255 }' 00:10:08.255 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.255 16:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.513 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:08.513 16:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:08.771 [2024-11-08 16:51:38.086981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:09.707 16:51:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:09.707 16:51:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.707 16:51:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.707 "name": "raid_bdev1", 00:10:09.707 "uuid": "8112b7d8-1114-4981-a328-c88305738850", 00:10:09.707 "strip_size_kb": 64, 00:10:09.707 "state": "online", 00:10:09.707 "raid_level": "raid0", 00:10:09.707 "superblock": true, 00:10:09.707 "num_base_bdevs": 3, 00:10:09.707 "num_base_bdevs_discovered": 3, 00:10:09.707 "num_base_bdevs_operational": 3, 00:10:09.707 "base_bdevs_list": [ 00:10:09.707 { 00:10:09.707 "name": "BaseBdev1", 00:10:09.707 "uuid": "bf0d349d-d863-553e-bfa2-1d19a85ce4ce", 00:10:09.707 "is_configured": true, 00:10:09.707 "data_offset": 2048, 00:10:09.707 "data_size": 63488 
00:10:09.707 }, 00:10:09.707 { 00:10:09.707 "name": "BaseBdev2", 00:10:09.707 "uuid": "de10f6b5-adfa-5bac-bebb-6b64698a6433", 00:10:09.707 "is_configured": true, 00:10:09.707 "data_offset": 2048, 00:10:09.707 "data_size": 63488 00:10:09.707 }, 00:10:09.707 { 00:10:09.707 "name": "BaseBdev3", 00:10:09.707 "uuid": "c3a3746f-4320-5926-8c13-ee9d0f6a84b1", 00:10:09.707 "is_configured": true, 00:10:09.707 "data_offset": 2048, 00:10:09.707 "data_size": 63488 00:10:09.707 } 00:10:09.707 ] 00:10:09.707 }' 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.707 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.967 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:09.967 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.967 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.967 [2024-11-08 16:51:39.447045] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:09.967 [2024-11-08 16:51:39.447084] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.967 [2024-11-08 16:51:39.449804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.967 [2024-11-08 16:51:39.449857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.967 [2024-11-08 16:51:39.449892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.967 [2024-11-08 16:51:39.449905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:09.967 { 00:10:09.967 "results": [ 00:10:09.967 { 00:10:09.967 "job": "raid_bdev1", 00:10:09.967 "core_mask": "0x1", 00:10:09.967 "workload": "randrw", 00:10:09.967 "percentage": 50, 
00:10:09.967 "status": "finished", 00:10:09.967 "queue_depth": 1, 00:10:09.967 "io_size": 131072, 00:10:09.967 "runtime": 1.360896, 00:10:09.967 "iops": 16697.08780097818, 00:10:09.967 "mibps": 2087.1359751222726, 00:10:09.967 "io_failed": 1, 00:10:09.967 "io_timeout": 0, 00:10:09.967 "avg_latency_us": 83.0412604952231, 00:10:09.967 "min_latency_us": 19.116157205240174, 00:10:09.967 "max_latency_us": 1445.2262008733624 00:10:09.967 } 00:10:09.967 ], 00:10:09.967 "core_count": 1 00:10:09.967 } 00:10:09.967 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.967 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76480 00:10:09.967 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76480 ']' 00:10:09.967 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76480 00:10:09.967 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:09.967 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.967 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76480 00:10:10.226 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.227 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.227 killing process with pid 76480 00:10:10.227 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76480' 00:10:10.227 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76480 00:10:10.227 [2024-11-08 16:51:39.497538] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.227 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76480 00:10:10.227 [2024-11-08 
16:51:39.523556] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.507 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2M7X7jEY3P 00:10:10.507 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:10.507 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:10.507 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:10.507 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:10.507 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:10.507 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:10.507 16:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:10.507 00:10:10.507 real 0m3.326s 00:10:10.507 user 0m4.225s 00:10:10.507 sys 0m0.528s 00:10:10.507 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.507 16:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.507 ************************************ 00:10:10.507 END TEST raid_read_error_test 00:10:10.507 ************************************ 00:10:10.507 16:51:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:10.507 16:51:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:10.507 16:51:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.507 16:51:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.507 ************************************ 00:10:10.507 START TEST raid_write_error_test 00:10:10.507 ************************************ 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:10:10.507 16:51:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:10.507 16:51:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aWYLtCPcs7 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76615 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76615 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76615 ']' 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.507 16:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:10.508 16:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.508 16:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.508 [2024-11-08 16:51:39.939363] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:10.508 [2024-11-08 16:51:39.939927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76615 ] 00:10:10.767 [2024-11-08 16:51:40.102024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.767 [2024-11-08 16:51:40.146843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.767 [2024-11-08 16:51:40.188897] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.767 [2024-11-08 16:51:40.188939] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.335 BaseBdev1_malloc 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.335 true 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.335 [2024-11-08 16:51:40.803698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:11.335 [2024-11-08 16:51:40.803751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.335 [2024-11-08 16:51:40.803780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:11.335 [2024-11-08 16:51:40.803789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.335 [2024-11-08 16:51:40.805947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.335 [2024-11-08 16:51:40.805982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:11.335 BaseBdev1 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.335 BaseBdev2_malloc 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.335 true 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.335 [2024-11-08 16:51:40.854160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:11.335 [2024-11-08 16:51:40.854212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.335 [2024-11-08 16:51:40.854232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:11.335 [2024-11-08 16:51:40.854241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.335 [2024-11-08 16:51:40.856518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.335 [2024-11-08 16:51:40.856552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:11.335 BaseBdev2 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:11.335 16:51:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.335 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.595 BaseBdev3_malloc 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.595 true 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.595 [2024-11-08 16:51:40.894647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:11.595 [2024-11-08 16:51:40.894692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.595 [2024-11-08 16:51:40.894710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:11.595 [2024-11-08 16:51:40.894719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.595 [2024-11-08 16:51:40.896777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.595 [2024-11-08 16:51:40.896820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:11.595 BaseBdev3 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.595 [2024-11-08 16:51:40.906684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.595 [2024-11-08 16:51:40.908537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.595 [2024-11-08 16:51:40.908636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.595 [2024-11-08 16:51:40.908810] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:11.595 [2024-11-08 16:51:40.908826] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:11.595 [2024-11-08 16:51:40.909081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:11.595 [2024-11-08 16:51:40.909223] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:11.595 [2024-11-08 16:51:40.909241] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:11.595 [2024-11-08 16:51:40.909357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.595 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.595 "name": "raid_bdev1", 00:10:11.595 "uuid": "a2d279b1-ef7e-4a52-961f-13935d549d6c", 00:10:11.595 "strip_size_kb": 64, 00:10:11.595 "state": "online", 00:10:11.595 "raid_level": "raid0", 00:10:11.595 "superblock": true, 00:10:11.595 "num_base_bdevs": 3, 00:10:11.595 "num_base_bdevs_discovered": 3, 00:10:11.595 "num_base_bdevs_operational": 3, 00:10:11.595 "base_bdevs_list": [ 00:10:11.595 { 00:10:11.595 "name": "BaseBdev1", 
00:10:11.595 "uuid": "cc41bde7-7635-594d-8e6a-e1ec89cd7527", 00:10:11.595 "is_configured": true, 00:10:11.595 "data_offset": 2048, 00:10:11.595 "data_size": 63488 00:10:11.595 }, 00:10:11.595 { 00:10:11.595 "name": "BaseBdev2", 00:10:11.595 "uuid": "79c0ebeb-8e9b-5d21-b6f4-f6249369d579", 00:10:11.595 "is_configured": true, 00:10:11.595 "data_offset": 2048, 00:10:11.595 "data_size": 63488 00:10:11.595 }, 00:10:11.595 { 00:10:11.595 "name": "BaseBdev3", 00:10:11.595 "uuid": "3d77be15-dbf1-58f1-b0a2-e47c8d8bb4c2", 00:10:11.595 "is_configured": true, 00:10:11.595 "data_offset": 2048, 00:10:11.595 "data_size": 63488 00:10:11.596 } 00:10:11.596 ] 00:10:11.596 }' 00:10:11.596 16:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.596 16:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.855 16:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:11.855 16:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:11.855 [2024-11-08 16:51:41.358265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.826 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.826 "name": "raid_bdev1", 00:10:12.826 "uuid": "a2d279b1-ef7e-4a52-961f-13935d549d6c", 00:10:12.826 "strip_size_kb": 64, 00:10:12.826 "state": "online", 00:10:12.826 
"raid_level": "raid0", 00:10:12.826 "superblock": true, 00:10:12.826 "num_base_bdevs": 3, 00:10:12.826 "num_base_bdevs_discovered": 3, 00:10:12.826 "num_base_bdevs_operational": 3, 00:10:12.826 "base_bdevs_list": [ 00:10:12.826 { 00:10:12.826 "name": "BaseBdev1", 00:10:12.826 "uuid": "cc41bde7-7635-594d-8e6a-e1ec89cd7527", 00:10:12.826 "is_configured": true, 00:10:12.826 "data_offset": 2048, 00:10:12.826 "data_size": 63488 00:10:12.826 }, 00:10:12.826 { 00:10:12.826 "name": "BaseBdev2", 00:10:12.826 "uuid": "79c0ebeb-8e9b-5d21-b6f4-f6249369d579", 00:10:12.826 "is_configured": true, 00:10:12.826 "data_offset": 2048, 00:10:12.826 "data_size": 63488 00:10:12.826 }, 00:10:12.826 { 00:10:12.827 "name": "BaseBdev3", 00:10:12.827 "uuid": "3d77be15-dbf1-58f1-b0a2-e47c8d8bb4c2", 00:10:12.827 "is_configured": true, 00:10:12.827 "data_offset": 2048, 00:10:12.827 "data_size": 63488 00:10:12.827 } 00:10:12.827 ] 00:10:12.827 }' 00:10:12.827 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.827 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.397 [2024-11-08 16:51:42.730003] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.397 [2024-11-08 16:51:42.730045] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.397 [2024-11-08 16:51:42.732679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.397 [2024-11-08 16:51:42.732741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.397 [2024-11-08 16:51:42.732776] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.397 [2024-11-08 16:51:42.732787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:13.397 { 00:10:13.397 "results": [ 00:10:13.397 { 00:10:13.397 "job": "raid_bdev1", 00:10:13.397 "core_mask": "0x1", 00:10:13.397 "workload": "randrw", 00:10:13.397 "percentage": 50, 00:10:13.397 "status": "finished", 00:10:13.397 "queue_depth": 1, 00:10:13.397 "io_size": 131072, 00:10:13.397 "runtime": 1.372514, 00:10:13.397 "iops": 16592.180480490544, 00:10:13.397 "mibps": 2074.022560061318, 00:10:13.397 "io_failed": 1, 00:10:13.397 "io_timeout": 0, 00:10:13.397 "avg_latency_us": 83.60356754024642, 00:10:13.397 "min_latency_us": 20.12227074235808, 00:10:13.397 "max_latency_us": 1645.5545851528384 00:10:13.397 } 00:10:13.397 ], 00:10:13.397 "core_count": 1 00:10:13.397 } 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76615 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76615 ']' 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76615 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76615 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:13.397 killing process with pid 76615 00:10:13.397 16:51:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76615' 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76615 00:10:13.397 [2024-11-08 16:51:42.767132] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.397 16:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76615 00:10:13.397 [2024-11-08 16:51:42.792586] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.657 16:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aWYLtCPcs7 00:10:13.657 16:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:13.657 16:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:13.657 16:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:13.657 16:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:13.657 16:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.657 16:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:13.657 16:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:13.657 00:10:13.657 real 0m3.202s 00:10:13.657 user 0m4.004s 00:10:13.657 sys 0m0.508s 00:10:13.657 16:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.657 16:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.657 ************************************ 00:10:13.657 END TEST raid_write_error_test 00:10:13.657 ************************************ 00:10:13.657 16:51:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:13.657 16:51:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:13.657 16:51:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:13.657 16:51:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.657 16:51:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.657 ************************************ 00:10:13.657 START TEST raid_state_function_test 00:10:13.657 ************************************ 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.657 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:13.658 16:51:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76742 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76742' 00:10:13.658 Process raid pid: 76742 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76742 00:10:13.658 16:51:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76742 ']' 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.658 16:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.918 [2024-11-08 16:51:43.204083] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:13.918 [2024-11-08 16:51:43.204252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.918 [2024-11-08 16:51:43.366306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.918 [2024-11-08 16:51:43.412776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.178 [2024-11-08 16:51:43.454078] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.178 [2024-11-08 16:51:43.454130] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.747 [2024-11-08 16:51:44.035284] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.747 [2024-11-08 16:51:44.035338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.747 [2024-11-08 16:51:44.035370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.747 [2024-11-08 16:51:44.035379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.747 [2024-11-08 16:51:44.035388] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.747 [2024-11-08 16:51:44.035399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.747 "name": "Existed_Raid", 00:10:14.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.747 "strip_size_kb": 64, 00:10:14.747 "state": "configuring", 00:10:14.747 "raid_level": "concat", 00:10:14.747 "superblock": false, 00:10:14.747 "num_base_bdevs": 3, 00:10:14.747 "num_base_bdevs_discovered": 0, 00:10:14.747 "num_base_bdevs_operational": 3, 00:10:14.747 "base_bdevs_list": [ 00:10:14.747 { 00:10:14.747 "name": "BaseBdev1", 00:10:14.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.747 "is_configured": false, 00:10:14.747 "data_offset": 0, 00:10:14.747 "data_size": 0 00:10:14.747 }, 00:10:14.747 { 00:10:14.747 "name": "BaseBdev2", 00:10:14.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.747 "is_configured": false, 00:10:14.747 "data_offset": 0, 00:10:14.747 "data_size": 0 00:10:14.747 }, 00:10:14.747 { 00:10:14.747 "name": "BaseBdev3", 00:10:14.747 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:14.747 "is_configured": false, 00:10:14.747 "data_offset": 0, 00:10:14.747 "data_size": 0 00:10:14.747 } 00:10:14.747 ] 00:10:14.747 }' 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.747 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.007 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.007 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.007 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.007 [2024-11-08 16:51:44.510411] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.007 [2024-11-08 16:51:44.510461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:15.007 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.007 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.007 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.007 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.007 [2024-11-08 16:51:44.518409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.007 [2024-11-08 16:51:44.518455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.007 [2024-11-08 16:51:44.518464] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.007 [2024-11-08 16:51:44.518473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:15.007 [2024-11-08 16:51:44.518479] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.007 [2024-11-08 16:51:44.518488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.007 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.007 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:15.007 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.007 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.266 [2024-11-08 16:51:44.535199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.266 BaseBdev1 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.266 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.267 [ 00:10:15.267 { 00:10:15.267 "name": "BaseBdev1", 00:10:15.267 "aliases": [ 00:10:15.267 "f0aa2127-77a0-4286-8f99-dfe953c6375a" 00:10:15.267 ], 00:10:15.267 "product_name": "Malloc disk", 00:10:15.267 "block_size": 512, 00:10:15.267 "num_blocks": 65536, 00:10:15.267 "uuid": "f0aa2127-77a0-4286-8f99-dfe953c6375a", 00:10:15.267 "assigned_rate_limits": { 00:10:15.267 "rw_ios_per_sec": 0, 00:10:15.267 "rw_mbytes_per_sec": 0, 00:10:15.267 "r_mbytes_per_sec": 0, 00:10:15.267 "w_mbytes_per_sec": 0 00:10:15.267 }, 00:10:15.267 "claimed": true, 00:10:15.267 "claim_type": "exclusive_write", 00:10:15.267 "zoned": false, 00:10:15.267 "supported_io_types": { 00:10:15.267 "read": true, 00:10:15.267 "write": true, 00:10:15.267 "unmap": true, 00:10:15.267 "flush": true, 00:10:15.267 "reset": true, 00:10:15.267 "nvme_admin": false, 00:10:15.267 "nvme_io": false, 00:10:15.267 "nvme_io_md": false, 00:10:15.267 "write_zeroes": true, 00:10:15.267 "zcopy": true, 00:10:15.267 "get_zone_info": false, 00:10:15.267 "zone_management": false, 00:10:15.267 "zone_append": false, 00:10:15.267 "compare": false, 00:10:15.267 "compare_and_write": false, 00:10:15.267 "abort": true, 00:10:15.267 "seek_hole": false, 00:10:15.267 "seek_data": false, 00:10:15.267 "copy": true, 00:10:15.267 "nvme_iov_md": false 00:10:15.267 }, 00:10:15.267 "memory_domains": [ 00:10:15.267 { 00:10:15.267 "dma_device_id": "system", 00:10:15.267 "dma_device_type": 1 00:10:15.267 }, 00:10:15.267 { 00:10:15.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:15.267 "dma_device_type": 2 00:10:15.267 } 00:10:15.267 ], 00:10:15.267 "driver_specific": {} 00:10:15.267 } 00:10:15.267 ] 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.267 16:51:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.267 "name": "Existed_Raid", 00:10:15.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.267 "strip_size_kb": 64, 00:10:15.267 "state": "configuring", 00:10:15.267 "raid_level": "concat", 00:10:15.267 "superblock": false, 00:10:15.267 "num_base_bdevs": 3, 00:10:15.267 "num_base_bdevs_discovered": 1, 00:10:15.267 "num_base_bdevs_operational": 3, 00:10:15.267 "base_bdevs_list": [ 00:10:15.267 { 00:10:15.267 "name": "BaseBdev1", 00:10:15.267 "uuid": "f0aa2127-77a0-4286-8f99-dfe953c6375a", 00:10:15.267 "is_configured": true, 00:10:15.267 "data_offset": 0, 00:10:15.267 "data_size": 65536 00:10:15.267 }, 00:10:15.267 { 00:10:15.267 "name": "BaseBdev2", 00:10:15.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.267 "is_configured": false, 00:10:15.267 "data_offset": 0, 00:10:15.267 "data_size": 0 00:10:15.267 }, 00:10:15.267 { 00:10:15.267 "name": "BaseBdev3", 00:10:15.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.267 "is_configured": false, 00:10:15.267 "data_offset": 0, 00:10:15.267 "data_size": 0 00:10:15.267 } 00:10:15.267 ] 00:10:15.267 }' 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.267 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.527 [2024-11-08 16:51:44.974512] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.527 [2024-11-08 16:51:44.974575] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.527 [2024-11-08 16:51:44.982526] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.527 [2024-11-08 16:51:44.984414] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.527 [2024-11-08 16:51:44.984462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.527 [2024-11-08 16:51:44.984472] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.527 [2024-11-08 16:51:44.984482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.527 16:51:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.527 16:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.527 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.527 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.527 "name": "Existed_Raid", 00:10:15.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.527 "strip_size_kb": 64, 00:10:15.527 "state": "configuring", 00:10:15.527 "raid_level": "concat", 00:10:15.527 "superblock": false, 00:10:15.527 "num_base_bdevs": 3, 00:10:15.527 "num_base_bdevs_discovered": 1, 00:10:15.527 "num_base_bdevs_operational": 3, 00:10:15.527 "base_bdevs_list": [ 00:10:15.527 { 00:10:15.527 "name": "BaseBdev1", 00:10:15.527 "uuid": "f0aa2127-77a0-4286-8f99-dfe953c6375a", 00:10:15.527 "is_configured": true, 00:10:15.527 "data_offset": 
0, 00:10:15.527 "data_size": 65536 00:10:15.527 }, 00:10:15.527 { 00:10:15.527 "name": "BaseBdev2", 00:10:15.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.527 "is_configured": false, 00:10:15.527 "data_offset": 0, 00:10:15.527 "data_size": 0 00:10:15.527 }, 00:10:15.527 { 00:10:15.527 "name": "BaseBdev3", 00:10:15.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.527 "is_configured": false, 00:10:15.527 "data_offset": 0, 00:10:15.527 "data_size": 0 00:10:15.527 } 00:10:15.527 ] 00:10:15.527 }' 00:10:15.527 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.527 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.098 [2024-11-08 16:51:45.452118] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.098 BaseBdev2 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.098 [ 00:10:16.098 { 00:10:16.098 "name": "BaseBdev2", 00:10:16.098 "aliases": [ 00:10:16.098 "c4898584-f9e7-4e19-89f1-14f0f3cc2a3d" 00:10:16.098 ], 00:10:16.098 "product_name": "Malloc disk", 00:10:16.098 "block_size": 512, 00:10:16.098 "num_blocks": 65536, 00:10:16.098 "uuid": "c4898584-f9e7-4e19-89f1-14f0f3cc2a3d", 00:10:16.098 "assigned_rate_limits": { 00:10:16.098 "rw_ios_per_sec": 0, 00:10:16.098 "rw_mbytes_per_sec": 0, 00:10:16.098 "r_mbytes_per_sec": 0, 00:10:16.098 "w_mbytes_per_sec": 0 00:10:16.098 }, 00:10:16.098 "claimed": true, 00:10:16.098 "claim_type": "exclusive_write", 00:10:16.098 "zoned": false, 00:10:16.098 "supported_io_types": { 00:10:16.098 "read": true, 00:10:16.098 "write": true, 00:10:16.098 "unmap": true, 00:10:16.098 "flush": true, 00:10:16.098 "reset": true, 00:10:16.098 "nvme_admin": false, 00:10:16.098 "nvme_io": false, 00:10:16.098 "nvme_io_md": false, 00:10:16.098 "write_zeroes": true, 00:10:16.098 "zcopy": true, 00:10:16.098 "get_zone_info": false, 00:10:16.098 "zone_management": false, 00:10:16.098 "zone_append": false, 00:10:16.098 "compare": false, 00:10:16.098 "compare_and_write": false, 00:10:16.098 "abort": true, 00:10:16.098 "seek_hole": 
false, 00:10:16.098 "seek_data": false, 00:10:16.098 "copy": true, 00:10:16.098 "nvme_iov_md": false 00:10:16.098 }, 00:10:16.098 "memory_domains": [ 00:10:16.098 { 00:10:16.098 "dma_device_id": "system", 00:10:16.098 "dma_device_type": 1 00:10:16.098 }, 00:10:16.098 { 00:10:16.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.098 "dma_device_type": 2 00:10:16.098 } 00:10:16.098 ], 00:10:16.098 "driver_specific": {} 00:10:16.098 } 00:10:16.098 ] 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.098 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.099 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.099 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.099 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.099 "name": "Existed_Raid", 00:10:16.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.099 "strip_size_kb": 64, 00:10:16.099 "state": "configuring", 00:10:16.099 "raid_level": "concat", 00:10:16.099 "superblock": false, 00:10:16.099 "num_base_bdevs": 3, 00:10:16.099 "num_base_bdevs_discovered": 2, 00:10:16.099 "num_base_bdevs_operational": 3, 00:10:16.099 "base_bdevs_list": [ 00:10:16.099 { 00:10:16.099 "name": "BaseBdev1", 00:10:16.099 "uuid": "f0aa2127-77a0-4286-8f99-dfe953c6375a", 00:10:16.099 "is_configured": true, 00:10:16.099 "data_offset": 0, 00:10:16.099 "data_size": 65536 00:10:16.099 }, 00:10:16.099 { 00:10:16.099 "name": "BaseBdev2", 00:10:16.099 "uuid": "c4898584-f9e7-4e19-89f1-14f0f3cc2a3d", 00:10:16.099 "is_configured": true, 00:10:16.099 "data_offset": 0, 00:10:16.099 "data_size": 65536 00:10:16.099 }, 00:10:16.099 { 00:10:16.099 "name": "BaseBdev3", 00:10:16.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.099 "is_configured": false, 00:10:16.099 "data_offset": 0, 00:10:16.099 "data_size": 0 00:10:16.099 } 00:10:16.099 ] 00:10:16.099 }' 00:10:16.099 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.099 16:51:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.667 [2024-11-08 16:51:45.922360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.667 [2024-11-08 16:51:45.922405] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:16.667 [2024-11-08 16:51:45.922417] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:16.667 [2024-11-08 16:51:45.922739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:16.667 [2024-11-08 16:51:45.922882] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:16.667 [2024-11-08 16:51:45.922905] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:16.667 [2024-11-08 16:51:45.923125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.667 BaseBdev3 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.667 16:51:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.667 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.667 [ 00:10:16.667 { 00:10:16.667 "name": "BaseBdev3", 00:10:16.667 "aliases": [ 00:10:16.667 "eed738e5-b656-40a9-b384-4dd5176b016b" 00:10:16.667 ], 00:10:16.667 "product_name": "Malloc disk", 00:10:16.667 "block_size": 512, 00:10:16.667 "num_blocks": 65536, 00:10:16.667 "uuid": "eed738e5-b656-40a9-b384-4dd5176b016b", 00:10:16.667 "assigned_rate_limits": { 00:10:16.667 "rw_ios_per_sec": 0, 00:10:16.667 "rw_mbytes_per_sec": 0, 00:10:16.667 "r_mbytes_per_sec": 0, 00:10:16.667 "w_mbytes_per_sec": 0 00:10:16.667 }, 00:10:16.667 "claimed": true, 00:10:16.667 "claim_type": "exclusive_write", 00:10:16.667 "zoned": false, 00:10:16.667 "supported_io_types": { 00:10:16.667 "read": true, 00:10:16.667 "write": true, 00:10:16.667 "unmap": true, 00:10:16.667 "flush": true, 00:10:16.667 "reset": true, 00:10:16.667 "nvme_admin": false, 00:10:16.667 "nvme_io": false, 00:10:16.667 "nvme_io_md": false, 00:10:16.667 "write_zeroes": true, 00:10:16.667 "zcopy": true, 00:10:16.667 "get_zone_info": false, 00:10:16.667 "zone_management": false, 00:10:16.667 "zone_append": false, 00:10:16.667 "compare": false, 
00:10:16.667 "compare_and_write": false, 00:10:16.667 "abort": true, 00:10:16.667 "seek_hole": false, 00:10:16.667 "seek_data": false, 00:10:16.667 "copy": true, 00:10:16.667 "nvme_iov_md": false 00:10:16.667 }, 00:10:16.667 "memory_domains": [ 00:10:16.667 { 00:10:16.667 "dma_device_id": "system", 00:10:16.667 "dma_device_type": 1 00:10:16.668 }, 00:10:16.668 { 00:10:16.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.668 "dma_device_type": 2 00:10:16.668 } 00:10:16.668 ], 00:10:16.668 "driver_specific": {} 00:10:16.668 } 00:10:16.668 ] 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.668 16:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.668 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.668 "name": "Existed_Raid", 00:10:16.668 "uuid": "48911ee0-4881-4f61-a716-183cb57ea0c7", 00:10:16.668 "strip_size_kb": 64, 00:10:16.668 "state": "online", 00:10:16.668 "raid_level": "concat", 00:10:16.668 "superblock": false, 00:10:16.668 "num_base_bdevs": 3, 00:10:16.668 "num_base_bdevs_discovered": 3, 00:10:16.668 "num_base_bdevs_operational": 3, 00:10:16.668 "base_bdevs_list": [ 00:10:16.668 { 00:10:16.668 "name": "BaseBdev1", 00:10:16.668 "uuid": "f0aa2127-77a0-4286-8f99-dfe953c6375a", 00:10:16.668 "is_configured": true, 00:10:16.668 "data_offset": 0, 00:10:16.668 "data_size": 65536 00:10:16.668 }, 00:10:16.668 { 00:10:16.668 "name": "BaseBdev2", 00:10:16.668 "uuid": "c4898584-f9e7-4e19-89f1-14f0f3cc2a3d", 00:10:16.668 "is_configured": true, 00:10:16.668 "data_offset": 0, 00:10:16.668 "data_size": 65536 00:10:16.668 }, 00:10:16.668 { 00:10:16.668 "name": "BaseBdev3", 00:10:16.668 "uuid": "eed738e5-b656-40a9-b384-4dd5176b016b", 00:10:16.668 "is_configured": true, 00:10:16.668 "data_offset": 0, 00:10:16.668 "data_size": 65536 00:10:16.668 } 00:10:16.668 ] 00:10:16.668 }' 00:10:16.668 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:16.668 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.928 [2024-11-08 16:51:46.358007] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.928 "name": "Existed_Raid", 00:10:16.928 "aliases": [ 00:10:16.928 "48911ee0-4881-4f61-a716-183cb57ea0c7" 00:10:16.928 ], 00:10:16.928 "product_name": "Raid Volume", 00:10:16.928 "block_size": 512, 00:10:16.928 "num_blocks": 196608, 00:10:16.928 "uuid": "48911ee0-4881-4f61-a716-183cb57ea0c7", 00:10:16.928 "assigned_rate_limits": { 00:10:16.928 "rw_ios_per_sec": 0, 00:10:16.928 "rw_mbytes_per_sec": 0, 00:10:16.928 "r_mbytes_per_sec": 
0, 00:10:16.928 "w_mbytes_per_sec": 0 00:10:16.928 }, 00:10:16.928 "claimed": false, 00:10:16.928 "zoned": false, 00:10:16.928 "supported_io_types": { 00:10:16.928 "read": true, 00:10:16.928 "write": true, 00:10:16.928 "unmap": true, 00:10:16.928 "flush": true, 00:10:16.928 "reset": true, 00:10:16.928 "nvme_admin": false, 00:10:16.928 "nvme_io": false, 00:10:16.928 "nvme_io_md": false, 00:10:16.928 "write_zeroes": true, 00:10:16.928 "zcopy": false, 00:10:16.928 "get_zone_info": false, 00:10:16.928 "zone_management": false, 00:10:16.928 "zone_append": false, 00:10:16.928 "compare": false, 00:10:16.928 "compare_and_write": false, 00:10:16.928 "abort": false, 00:10:16.928 "seek_hole": false, 00:10:16.928 "seek_data": false, 00:10:16.928 "copy": false, 00:10:16.928 "nvme_iov_md": false 00:10:16.928 }, 00:10:16.928 "memory_domains": [ 00:10:16.928 { 00:10:16.928 "dma_device_id": "system", 00:10:16.928 "dma_device_type": 1 00:10:16.928 }, 00:10:16.928 { 00:10:16.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.928 "dma_device_type": 2 00:10:16.928 }, 00:10:16.928 { 00:10:16.928 "dma_device_id": "system", 00:10:16.928 "dma_device_type": 1 00:10:16.928 }, 00:10:16.928 { 00:10:16.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.928 "dma_device_type": 2 00:10:16.928 }, 00:10:16.928 { 00:10:16.928 "dma_device_id": "system", 00:10:16.928 "dma_device_type": 1 00:10:16.928 }, 00:10:16.928 { 00:10:16.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.928 "dma_device_type": 2 00:10:16.928 } 00:10:16.928 ], 00:10:16.928 "driver_specific": { 00:10:16.928 "raid": { 00:10:16.928 "uuid": "48911ee0-4881-4f61-a716-183cb57ea0c7", 00:10:16.928 "strip_size_kb": 64, 00:10:16.928 "state": "online", 00:10:16.928 "raid_level": "concat", 00:10:16.928 "superblock": false, 00:10:16.928 "num_base_bdevs": 3, 00:10:16.928 "num_base_bdevs_discovered": 3, 00:10:16.928 "num_base_bdevs_operational": 3, 00:10:16.928 "base_bdevs_list": [ 00:10:16.928 { 00:10:16.928 "name": "BaseBdev1", 
00:10:16.928 "uuid": "f0aa2127-77a0-4286-8f99-dfe953c6375a", 00:10:16.928 "is_configured": true, 00:10:16.928 "data_offset": 0, 00:10:16.928 "data_size": 65536 00:10:16.928 }, 00:10:16.928 { 00:10:16.928 "name": "BaseBdev2", 00:10:16.928 "uuid": "c4898584-f9e7-4e19-89f1-14f0f3cc2a3d", 00:10:16.928 "is_configured": true, 00:10:16.928 "data_offset": 0, 00:10:16.928 "data_size": 65536 00:10:16.928 }, 00:10:16.928 { 00:10:16.928 "name": "BaseBdev3", 00:10:16.928 "uuid": "eed738e5-b656-40a9-b384-4dd5176b016b", 00:10:16.928 "is_configured": true, 00:10:16.928 "data_offset": 0, 00:10:16.928 "data_size": 65536 00:10:16.928 } 00:10:16.928 ] 00:10:16.928 } 00:10:16.928 } 00:10:16.928 }' 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:16.928 BaseBdev2 00:10:16.928 BaseBdev3' 00:10:16.928 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.189 [2024-11-08 16:51:46.609352] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.189 [2024-11-08 16:51:46.609391] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.189 [2024-11-08 16:51:46.609494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.189 "name": "Existed_Raid", 00:10:17.189 "uuid": "48911ee0-4881-4f61-a716-183cb57ea0c7", 00:10:17.189 "strip_size_kb": 64, 00:10:17.189 "state": "offline", 00:10:17.189 "raid_level": "concat", 00:10:17.189 "superblock": false, 00:10:17.189 "num_base_bdevs": 3, 00:10:17.189 "num_base_bdevs_discovered": 2, 00:10:17.189 "num_base_bdevs_operational": 2, 00:10:17.189 "base_bdevs_list": [ 00:10:17.189 { 00:10:17.189 "name": null, 00:10:17.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.189 "is_configured": false, 00:10:17.189 "data_offset": 0, 00:10:17.189 "data_size": 65536 00:10:17.189 }, 00:10:17.189 { 00:10:17.189 "name": "BaseBdev2", 00:10:17.189 "uuid": 
"c4898584-f9e7-4e19-89f1-14f0f3cc2a3d", 00:10:17.189 "is_configured": true, 00:10:17.189 "data_offset": 0, 00:10:17.189 "data_size": 65536 00:10:17.189 }, 00:10:17.189 { 00:10:17.189 "name": "BaseBdev3", 00:10:17.189 "uuid": "eed738e5-b656-40a9-b384-4dd5176b016b", 00:10:17.189 "is_configured": true, 00:10:17.189 "data_offset": 0, 00:10:17.189 "data_size": 65536 00:10:17.189 } 00:10:17.189 ] 00:10:17.189 }' 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.189 16:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.759 [2024-11-08 16:51:47.132088] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.759 [2024-11-08 16:51:47.203421] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:17.759 [2024-11-08 16:51:47.203482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.759 16:51:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.759 BaseBdev2 00:10:17.759 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.760 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:17.760 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:17.760 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:17.760 
16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:17.760 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.019 [ 00:10:18.019 { 00:10:18.019 "name": "BaseBdev2", 00:10:18.019 "aliases": [ 00:10:18.019 "c0e12f6d-68ec-475b-876f-f7cbaad7404d" 00:10:18.019 ], 00:10:18.019 "product_name": "Malloc disk", 00:10:18.019 "block_size": 512, 00:10:18.019 "num_blocks": 65536, 00:10:18.019 "uuid": "c0e12f6d-68ec-475b-876f-f7cbaad7404d", 00:10:18.019 "assigned_rate_limits": { 00:10:18.019 "rw_ios_per_sec": 0, 00:10:18.019 "rw_mbytes_per_sec": 0, 00:10:18.019 "r_mbytes_per_sec": 0, 00:10:18.019 "w_mbytes_per_sec": 0 00:10:18.019 }, 00:10:18.019 "claimed": false, 00:10:18.019 "zoned": false, 00:10:18.019 "supported_io_types": { 00:10:18.019 "read": true, 00:10:18.019 "write": true, 00:10:18.019 "unmap": true, 00:10:18.019 "flush": true, 00:10:18.019 "reset": true, 00:10:18.019 "nvme_admin": false, 00:10:18.019 "nvme_io": false, 00:10:18.019 "nvme_io_md": false, 00:10:18.019 "write_zeroes": true, 
00:10:18.019 "zcopy": true, 00:10:18.019 "get_zone_info": false, 00:10:18.019 "zone_management": false, 00:10:18.019 "zone_append": false, 00:10:18.019 "compare": false, 00:10:18.019 "compare_and_write": false, 00:10:18.019 "abort": true, 00:10:18.019 "seek_hole": false, 00:10:18.019 "seek_data": false, 00:10:18.019 "copy": true, 00:10:18.019 "nvme_iov_md": false 00:10:18.019 }, 00:10:18.019 "memory_domains": [ 00:10:18.019 { 00:10:18.019 "dma_device_id": "system", 00:10:18.019 "dma_device_type": 1 00:10:18.019 }, 00:10:18.019 { 00:10:18.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.019 "dma_device_type": 2 00:10:18.019 } 00:10:18.019 ], 00:10:18.019 "driver_specific": {} 00:10:18.019 } 00:10:18.019 ] 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.019 BaseBdev3 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.019 16:51:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.019 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.019 [ 00:10:18.019 { 00:10:18.019 "name": "BaseBdev3", 00:10:18.019 "aliases": [ 00:10:18.019 "b8003ade-1603-4f66-ad81-876a8bba6f82" 00:10:18.019 ], 00:10:18.019 "product_name": "Malloc disk", 00:10:18.019 "block_size": 512, 00:10:18.019 "num_blocks": 65536, 00:10:18.019 "uuid": "b8003ade-1603-4f66-ad81-876a8bba6f82", 00:10:18.019 "assigned_rate_limits": { 00:10:18.019 "rw_ios_per_sec": 0, 00:10:18.019 "rw_mbytes_per_sec": 0, 00:10:18.019 "r_mbytes_per_sec": 0, 00:10:18.019 "w_mbytes_per_sec": 0 00:10:18.019 }, 00:10:18.019 "claimed": false, 00:10:18.019 "zoned": false, 00:10:18.019 "supported_io_types": { 00:10:18.019 "read": true, 00:10:18.019 "write": true, 00:10:18.019 "unmap": true, 00:10:18.019 "flush": true, 00:10:18.019 "reset": true, 00:10:18.019 "nvme_admin": false, 00:10:18.019 "nvme_io": false, 00:10:18.019 "nvme_io_md": false, 00:10:18.019 "write_zeroes": true, 
00:10:18.019 "zcopy": true, 00:10:18.019 "get_zone_info": false, 00:10:18.019 "zone_management": false, 00:10:18.019 "zone_append": false, 00:10:18.019 "compare": false, 00:10:18.019 "compare_and_write": false, 00:10:18.019 "abort": true, 00:10:18.019 "seek_hole": false, 00:10:18.019 "seek_data": false, 00:10:18.019 "copy": true, 00:10:18.019 "nvme_iov_md": false 00:10:18.019 }, 00:10:18.019 "memory_domains": [ 00:10:18.019 { 00:10:18.020 "dma_device_id": "system", 00:10:18.020 "dma_device_type": 1 00:10:18.020 }, 00:10:18.020 { 00:10:18.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.020 "dma_device_type": 2 00:10:18.020 } 00:10:18.020 ], 00:10:18.020 "driver_specific": {} 00:10:18.020 } 00:10:18.020 ] 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.020 [2024-11-08 16:51:47.379816] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.020 [2024-11-08 16:51:47.379918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.020 [2024-11-08 16:51:47.379969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.020 [2024-11-08 16:51:47.381914] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.020 "name": "Existed_Raid", 00:10:18.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.020 "strip_size_kb": 64, 00:10:18.020 "state": "configuring", 00:10:18.020 "raid_level": "concat", 00:10:18.020 "superblock": false, 00:10:18.020 "num_base_bdevs": 3, 00:10:18.020 "num_base_bdevs_discovered": 2, 00:10:18.020 "num_base_bdevs_operational": 3, 00:10:18.020 "base_bdevs_list": [ 00:10:18.020 { 00:10:18.020 "name": "BaseBdev1", 00:10:18.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.020 "is_configured": false, 00:10:18.020 "data_offset": 0, 00:10:18.020 "data_size": 0 00:10:18.020 }, 00:10:18.020 { 00:10:18.020 "name": "BaseBdev2", 00:10:18.020 "uuid": "c0e12f6d-68ec-475b-876f-f7cbaad7404d", 00:10:18.020 "is_configured": true, 00:10:18.020 "data_offset": 0, 00:10:18.020 "data_size": 65536 00:10:18.020 }, 00:10:18.020 { 00:10:18.020 "name": "BaseBdev3", 00:10:18.020 "uuid": "b8003ade-1603-4f66-ad81-876a8bba6f82", 00:10:18.020 "is_configured": true, 00:10:18.020 "data_offset": 0, 00:10:18.020 "data_size": 65536 00:10:18.020 } 00:10:18.020 ] 00:10:18.020 }' 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.020 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.589 [2024-11-08 16:51:47.839083] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.589 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.590 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.590 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.590 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.590 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.590 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.590 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.590 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.590 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.590 "name": "Existed_Raid", 00:10:18.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.590 "strip_size_kb": 64, 00:10:18.590 "state": "configuring", 00:10:18.590 "raid_level": "concat", 00:10:18.590 "superblock": false, 
00:10:18.590 "num_base_bdevs": 3, 00:10:18.590 "num_base_bdevs_discovered": 1, 00:10:18.590 "num_base_bdevs_operational": 3, 00:10:18.590 "base_bdevs_list": [ 00:10:18.590 { 00:10:18.590 "name": "BaseBdev1", 00:10:18.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.590 "is_configured": false, 00:10:18.590 "data_offset": 0, 00:10:18.590 "data_size": 0 00:10:18.590 }, 00:10:18.590 { 00:10:18.590 "name": null, 00:10:18.590 "uuid": "c0e12f6d-68ec-475b-876f-f7cbaad7404d", 00:10:18.590 "is_configured": false, 00:10:18.590 "data_offset": 0, 00:10:18.590 "data_size": 65536 00:10:18.590 }, 00:10:18.590 { 00:10:18.590 "name": "BaseBdev3", 00:10:18.590 "uuid": "b8003ade-1603-4f66-ad81-876a8bba6f82", 00:10:18.590 "is_configured": true, 00:10:18.590 "data_offset": 0, 00:10:18.590 "data_size": 65536 00:10:18.590 } 00:10:18.590 ] 00:10:18.590 }' 00:10:18.590 16:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.590 16:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.849 
16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.849 [2024-11-08 16:51:48.301269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.849 BaseBdev1 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.849 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.850 [ 00:10:18.850 { 00:10:18.850 "name": "BaseBdev1", 00:10:18.850 "aliases": [ 00:10:18.850 "5b772c6c-e424-4368-bc09-58334e2fdf2f" 00:10:18.850 ], 00:10:18.850 "product_name": 
"Malloc disk", 00:10:18.850 "block_size": 512, 00:10:18.850 "num_blocks": 65536, 00:10:18.850 "uuid": "5b772c6c-e424-4368-bc09-58334e2fdf2f", 00:10:18.850 "assigned_rate_limits": { 00:10:18.850 "rw_ios_per_sec": 0, 00:10:18.850 "rw_mbytes_per_sec": 0, 00:10:18.850 "r_mbytes_per_sec": 0, 00:10:18.850 "w_mbytes_per_sec": 0 00:10:18.850 }, 00:10:18.850 "claimed": true, 00:10:18.850 "claim_type": "exclusive_write", 00:10:18.850 "zoned": false, 00:10:18.850 "supported_io_types": { 00:10:18.850 "read": true, 00:10:18.850 "write": true, 00:10:18.850 "unmap": true, 00:10:18.850 "flush": true, 00:10:18.850 "reset": true, 00:10:18.850 "nvme_admin": false, 00:10:18.850 "nvme_io": false, 00:10:18.850 "nvme_io_md": false, 00:10:18.850 "write_zeroes": true, 00:10:18.850 "zcopy": true, 00:10:18.850 "get_zone_info": false, 00:10:18.850 "zone_management": false, 00:10:18.850 "zone_append": false, 00:10:18.850 "compare": false, 00:10:18.850 "compare_and_write": false, 00:10:18.850 "abort": true, 00:10:18.850 "seek_hole": false, 00:10:18.850 "seek_data": false, 00:10:18.850 "copy": true, 00:10:18.850 "nvme_iov_md": false 00:10:18.850 }, 00:10:18.850 "memory_domains": [ 00:10:18.850 { 00:10:18.850 "dma_device_id": "system", 00:10:18.850 "dma_device_type": 1 00:10:18.850 }, 00:10:18.850 { 00:10:18.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.850 "dma_device_type": 2 00:10:18.850 } 00:10:18.850 ], 00:10:18.850 "driver_specific": {} 00:10:18.850 } 00:10:18.850 ] 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.850 16:51:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.850 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.110 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.110 "name": "Existed_Raid", 00:10:19.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.110 "strip_size_kb": 64, 00:10:19.110 "state": "configuring", 00:10:19.110 "raid_level": "concat", 00:10:19.110 "superblock": false, 00:10:19.110 "num_base_bdevs": 3, 00:10:19.110 "num_base_bdevs_discovered": 2, 00:10:19.110 "num_base_bdevs_operational": 3, 00:10:19.110 "base_bdevs_list": [ 00:10:19.110 { 00:10:19.110 "name": "BaseBdev1", 
00:10:19.110 "uuid": "5b772c6c-e424-4368-bc09-58334e2fdf2f", 00:10:19.110 "is_configured": true, 00:10:19.110 "data_offset": 0, 00:10:19.110 "data_size": 65536 00:10:19.110 }, 00:10:19.110 { 00:10:19.110 "name": null, 00:10:19.110 "uuid": "c0e12f6d-68ec-475b-876f-f7cbaad7404d", 00:10:19.110 "is_configured": false, 00:10:19.110 "data_offset": 0, 00:10:19.110 "data_size": 65536 00:10:19.110 }, 00:10:19.110 { 00:10:19.110 "name": "BaseBdev3", 00:10:19.110 "uuid": "b8003ade-1603-4f66-ad81-876a8bba6f82", 00:10:19.110 "is_configured": true, 00:10:19.110 "data_offset": 0, 00:10:19.110 "data_size": 65536 00:10:19.110 } 00:10:19.110 ] 00:10:19.110 }' 00:10:19.110 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.110 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.368 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.368 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.369 [2024-11-08 16:51:48.852427] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.369 
16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.369 "name": "Existed_Raid", 00:10:19.369 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:19.369 "strip_size_kb": 64, 00:10:19.369 "state": "configuring", 00:10:19.369 "raid_level": "concat", 00:10:19.369 "superblock": false, 00:10:19.369 "num_base_bdevs": 3, 00:10:19.369 "num_base_bdevs_discovered": 1, 00:10:19.369 "num_base_bdevs_operational": 3, 00:10:19.369 "base_bdevs_list": [ 00:10:19.369 { 00:10:19.369 "name": "BaseBdev1", 00:10:19.369 "uuid": "5b772c6c-e424-4368-bc09-58334e2fdf2f", 00:10:19.369 "is_configured": true, 00:10:19.369 "data_offset": 0, 00:10:19.369 "data_size": 65536 00:10:19.369 }, 00:10:19.369 { 00:10:19.369 "name": null, 00:10:19.369 "uuid": "c0e12f6d-68ec-475b-876f-f7cbaad7404d", 00:10:19.369 "is_configured": false, 00:10:19.369 "data_offset": 0, 00:10:19.369 "data_size": 65536 00:10:19.369 }, 00:10:19.369 { 00:10:19.369 "name": null, 00:10:19.369 "uuid": "b8003ade-1603-4f66-ad81-876a8bba6f82", 00:10:19.369 "is_configured": false, 00:10:19.369 "data_offset": 0, 00:10:19.369 "data_size": 65536 00:10:19.369 } 00:10:19.369 ] 00:10:19.369 }' 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.369 16:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.936 [2024-11-08 16:51:49.343598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.936 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.937 "name": "Existed_Raid", 00:10:19.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.937 "strip_size_kb": 64, 00:10:19.937 "state": "configuring", 00:10:19.937 "raid_level": "concat", 00:10:19.937 "superblock": false, 00:10:19.937 "num_base_bdevs": 3, 00:10:19.937 "num_base_bdevs_discovered": 2, 00:10:19.937 "num_base_bdevs_operational": 3, 00:10:19.937 "base_bdevs_list": [ 00:10:19.937 { 00:10:19.937 "name": "BaseBdev1", 00:10:19.937 "uuid": "5b772c6c-e424-4368-bc09-58334e2fdf2f", 00:10:19.937 "is_configured": true, 00:10:19.937 "data_offset": 0, 00:10:19.937 "data_size": 65536 00:10:19.937 }, 00:10:19.937 { 00:10:19.937 "name": null, 00:10:19.937 "uuid": "c0e12f6d-68ec-475b-876f-f7cbaad7404d", 00:10:19.937 "is_configured": false, 00:10:19.937 "data_offset": 0, 00:10:19.937 "data_size": 65536 00:10:19.937 }, 00:10:19.937 { 00:10:19.937 "name": "BaseBdev3", 00:10:19.937 "uuid": "b8003ade-1603-4f66-ad81-876a8bba6f82", 00:10:19.937 "is_configured": true, 00:10:19.937 "data_offset": 0, 00:10:19.937 "data_size": 65536 00:10:19.937 } 00:10:19.937 ] 00:10:19.937 }' 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.937 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.503 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.503 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.503 16:51:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.503 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.504 [2024-11-08 16:51:49.770893] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.504 
16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.504 "name": "Existed_Raid", 00:10:20.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.504 "strip_size_kb": 64, 00:10:20.504 "state": "configuring", 00:10:20.504 "raid_level": "concat", 00:10:20.504 "superblock": false, 00:10:20.504 "num_base_bdevs": 3, 00:10:20.504 "num_base_bdevs_discovered": 1, 00:10:20.504 "num_base_bdevs_operational": 3, 00:10:20.504 "base_bdevs_list": [ 00:10:20.504 { 00:10:20.504 "name": null, 00:10:20.504 "uuid": "5b772c6c-e424-4368-bc09-58334e2fdf2f", 00:10:20.504 "is_configured": false, 00:10:20.504 "data_offset": 0, 00:10:20.504 "data_size": 65536 00:10:20.504 }, 00:10:20.504 { 00:10:20.504 "name": null, 00:10:20.504 "uuid": "c0e12f6d-68ec-475b-876f-f7cbaad7404d", 00:10:20.504 "is_configured": false, 00:10:20.504 "data_offset": 0, 00:10:20.504 "data_size": 65536 00:10:20.504 }, 00:10:20.504 { 00:10:20.504 "name": "BaseBdev3", 00:10:20.504 "uuid": "b8003ade-1603-4f66-ad81-876a8bba6f82", 00:10:20.504 "is_configured": true, 00:10:20.504 "data_offset": 0, 00:10:20.504 "data_size": 65536 00:10:20.504 } 00:10:20.504 ] 00:10:20.504 }' 00:10:20.504 16:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.504 16:51:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.764 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.764 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:20.764 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.764 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.764 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.023 [2024-11-08 16:51:50.316362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.023 16:51:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.023 "name": "Existed_Raid", 00:10:21.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.023 "strip_size_kb": 64, 00:10:21.023 "state": "configuring", 00:10:21.023 "raid_level": "concat", 00:10:21.023 "superblock": false, 00:10:21.023 "num_base_bdevs": 3, 00:10:21.023 "num_base_bdevs_discovered": 2, 00:10:21.023 "num_base_bdevs_operational": 3, 00:10:21.023 "base_bdevs_list": [ 00:10:21.023 { 00:10:21.023 "name": null, 00:10:21.023 "uuid": "5b772c6c-e424-4368-bc09-58334e2fdf2f", 00:10:21.023 "is_configured": false, 00:10:21.023 "data_offset": 0, 00:10:21.023 "data_size": 65536 00:10:21.023 }, 00:10:21.023 { 00:10:21.023 "name": "BaseBdev2", 00:10:21.023 "uuid": "c0e12f6d-68ec-475b-876f-f7cbaad7404d", 00:10:21.023 "is_configured": true, 00:10:21.023 "data_offset": 
0, 00:10:21.023 "data_size": 65536 00:10:21.023 }, 00:10:21.023 { 00:10:21.023 "name": "BaseBdev3", 00:10:21.023 "uuid": "b8003ade-1603-4f66-ad81-876a8bba6f82", 00:10:21.023 "is_configured": true, 00:10:21.023 "data_offset": 0, 00:10:21.023 "data_size": 65536 00:10:21.023 } 00:10:21.023 ] 00:10:21.023 }' 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.023 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.283 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.283 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:21.283 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.283 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.283 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.283 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:21.283 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5b772c6c-e424-4368-bc09-58334e2fdf2f 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.543 [2024-11-08 16:51:50.846327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:21.543 [2024-11-08 16:51:50.846370] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:21.543 [2024-11-08 16:51:50.846380] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:21.543 [2024-11-08 16:51:50.846622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:21.543 [2024-11-08 16:51:50.846762] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:21.543 [2024-11-08 16:51:50.846772] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:21.543 [2024-11-08 16:51:50.846955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.543 NewBaseBdev 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:21.543 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.544 
16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.544 [ 00:10:21.544 { 00:10:21.544 "name": "NewBaseBdev", 00:10:21.544 "aliases": [ 00:10:21.544 "5b772c6c-e424-4368-bc09-58334e2fdf2f" 00:10:21.544 ], 00:10:21.544 "product_name": "Malloc disk", 00:10:21.544 "block_size": 512, 00:10:21.544 "num_blocks": 65536, 00:10:21.544 "uuid": "5b772c6c-e424-4368-bc09-58334e2fdf2f", 00:10:21.544 "assigned_rate_limits": { 00:10:21.544 "rw_ios_per_sec": 0, 00:10:21.544 "rw_mbytes_per_sec": 0, 00:10:21.544 "r_mbytes_per_sec": 0, 00:10:21.544 "w_mbytes_per_sec": 0 00:10:21.544 }, 00:10:21.544 "claimed": true, 00:10:21.544 "claim_type": "exclusive_write", 00:10:21.544 "zoned": false, 00:10:21.544 "supported_io_types": { 00:10:21.544 "read": true, 00:10:21.544 "write": true, 00:10:21.544 "unmap": true, 00:10:21.544 "flush": true, 00:10:21.544 "reset": true, 00:10:21.544 "nvme_admin": false, 00:10:21.544 "nvme_io": false, 00:10:21.544 "nvme_io_md": false, 00:10:21.544 "write_zeroes": true, 00:10:21.544 "zcopy": true, 00:10:21.544 "get_zone_info": false, 00:10:21.544 "zone_management": false, 00:10:21.544 "zone_append": false, 00:10:21.544 "compare": false, 00:10:21.544 "compare_and_write": false, 00:10:21.544 "abort": true, 00:10:21.544 "seek_hole": false, 00:10:21.544 "seek_data": false, 00:10:21.544 "copy": true, 00:10:21.544 "nvme_iov_md": false 00:10:21.544 }, 00:10:21.544 
"memory_domains": [ 00:10:21.544 { 00:10:21.544 "dma_device_id": "system", 00:10:21.544 "dma_device_type": 1 00:10:21.544 }, 00:10:21.544 { 00:10:21.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.544 "dma_device_type": 2 00:10:21.544 } 00:10:21.544 ], 00:10:21.544 "driver_specific": {} 00:10:21.544 } 00:10:21.544 ] 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.544 "name": "Existed_Raid", 00:10:21.544 "uuid": "15c12ab7-f355-45ea-aeb8-bbc6389dd820", 00:10:21.544 "strip_size_kb": 64, 00:10:21.544 "state": "online", 00:10:21.544 "raid_level": "concat", 00:10:21.544 "superblock": false, 00:10:21.544 "num_base_bdevs": 3, 00:10:21.544 "num_base_bdevs_discovered": 3, 00:10:21.544 "num_base_bdevs_operational": 3, 00:10:21.544 "base_bdevs_list": [ 00:10:21.544 { 00:10:21.544 "name": "NewBaseBdev", 00:10:21.544 "uuid": "5b772c6c-e424-4368-bc09-58334e2fdf2f", 00:10:21.544 "is_configured": true, 00:10:21.544 "data_offset": 0, 00:10:21.544 "data_size": 65536 00:10:21.544 }, 00:10:21.544 { 00:10:21.544 "name": "BaseBdev2", 00:10:21.544 "uuid": "c0e12f6d-68ec-475b-876f-f7cbaad7404d", 00:10:21.544 "is_configured": true, 00:10:21.544 "data_offset": 0, 00:10:21.544 "data_size": 65536 00:10:21.544 }, 00:10:21.544 { 00:10:21.544 "name": "BaseBdev3", 00:10:21.544 "uuid": "b8003ade-1603-4f66-ad81-876a8bba6f82", 00:10:21.544 "is_configured": true, 00:10:21.544 "data_offset": 0, 00:10:21.544 "data_size": 65536 00:10:21.544 } 00:10:21.544 ] 00:10:21.544 }' 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.544 16:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.803 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.803 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:21.803 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:10:21.803 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.803 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.803 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.063 [2024-11-08 16:51:51.333883] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.063 "name": "Existed_Raid", 00:10:22.063 "aliases": [ 00:10:22.063 "15c12ab7-f355-45ea-aeb8-bbc6389dd820" 00:10:22.063 ], 00:10:22.063 "product_name": "Raid Volume", 00:10:22.063 "block_size": 512, 00:10:22.063 "num_blocks": 196608, 00:10:22.063 "uuid": "15c12ab7-f355-45ea-aeb8-bbc6389dd820", 00:10:22.063 "assigned_rate_limits": { 00:10:22.063 "rw_ios_per_sec": 0, 00:10:22.063 "rw_mbytes_per_sec": 0, 00:10:22.063 "r_mbytes_per_sec": 0, 00:10:22.063 "w_mbytes_per_sec": 0 00:10:22.063 }, 00:10:22.063 "claimed": false, 00:10:22.063 "zoned": false, 00:10:22.063 "supported_io_types": { 00:10:22.063 "read": true, 00:10:22.063 "write": true, 00:10:22.063 "unmap": true, 00:10:22.063 "flush": true, 00:10:22.063 "reset": true, 00:10:22.063 "nvme_admin": false, 00:10:22.063 "nvme_io": false, 00:10:22.063 "nvme_io_md": false, 00:10:22.063 "write_zeroes": true, 
00:10:22.063 "zcopy": false, 00:10:22.063 "get_zone_info": false, 00:10:22.063 "zone_management": false, 00:10:22.063 "zone_append": false, 00:10:22.063 "compare": false, 00:10:22.063 "compare_and_write": false, 00:10:22.063 "abort": false, 00:10:22.063 "seek_hole": false, 00:10:22.063 "seek_data": false, 00:10:22.063 "copy": false, 00:10:22.063 "nvme_iov_md": false 00:10:22.063 }, 00:10:22.063 "memory_domains": [ 00:10:22.063 { 00:10:22.063 "dma_device_id": "system", 00:10:22.063 "dma_device_type": 1 00:10:22.063 }, 00:10:22.063 { 00:10:22.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.063 "dma_device_type": 2 00:10:22.063 }, 00:10:22.063 { 00:10:22.063 "dma_device_id": "system", 00:10:22.063 "dma_device_type": 1 00:10:22.063 }, 00:10:22.063 { 00:10:22.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.063 "dma_device_type": 2 00:10:22.063 }, 00:10:22.063 { 00:10:22.063 "dma_device_id": "system", 00:10:22.063 "dma_device_type": 1 00:10:22.063 }, 00:10:22.063 { 00:10:22.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.063 "dma_device_type": 2 00:10:22.063 } 00:10:22.063 ], 00:10:22.063 "driver_specific": { 00:10:22.063 "raid": { 00:10:22.063 "uuid": "15c12ab7-f355-45ea-aeb8-bbc6389dd820", 00:10:22.063 "strip_size_kb": 64, 00:10:22.063 "state": "online", 00:10:22.063 "raid_level": "concat", 00:10:22.063 "superblock": false, 00:10:22.063 "num_base_bdevs": 3, 00:10:22.063 "num_base_bdevs_discovered": 3, 00:10:22.063 "num_base_bdevs_operational": 3, 00:10:22.063 "base_bdevs_list": [ 00:10:22.063 { 00:10:22.063 "name": "NewBaseBdev", 00:10:22.063 "uuid": "5b772c6c-e424-4368-bc09-58334e2fdf2f", 00:10:22.063 "is_configured": true, 00:10:22.063 "data_offset": 0, 00:10:22.063 "data_size": 65536 00:10:22.063 }, 00:10:22.063 { 00:10:22.063 "name": "BaseBdev2", 00:10:22.063 "uuid": "c0e12f6d-68ec-475b-876f-f7cbaad7404d", 00:10:22.063 "is_configured": true, 00:10:22.063 "data_offset": 0, 00:10:22.063 "data_size": 65536 00:10:22.063 }, 00:10:22.063 { 
00:10:22.063 "name": "BaseBdev3", 00:10:22.063 "uuid": "b8003ade-1603-4f66-ad81-876a8bba6f82", 00:10:22.063 "is_configured": true, 00:10:22.063 "data_offset": 0, 00:10:22.063 "data_size": 65536 00:10:22.063 } 00:10:22.063 ] 00:10:22.063 } 00:10:22.063 } 00:10:22.063 }' 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:22.063 BaseBdev2 00:10:22.063 BaseBdev3' 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.063 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.064 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.064 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.064 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:22.064 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.064 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.064 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.064 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.323 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.323 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.323 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.323 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.323 16:51:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:22.323 [2024-11-08 16:51:51.593091] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.323 [2024-11-08 16:51:51.593120] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.323 [2024-11-08 16:51:51.593195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.323 [2024-11-08 16:51:51.593251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.323 [2024-11-08 16:51:51.593263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:22.323 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.323 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76742 00:10:22.323 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76742 ']' 00:10:22.323 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76742 00:10:22.324 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:22.324 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.324 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76742 00:10:22.324 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.324 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.324 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76742' 00:10:22.324 killing process with pid 76742 00:10:22.324 16:51:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 76742 00:10:22.324 [2024-11-08 16:51:51.644356] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.324 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76742 00:10:22.324 [2024-11-08 16:51:51.674785] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.584 16:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:22.584 00:10:22.584 real 0m8.817s 00:10:22.584 user 0m15.091s 00:10:22.584 sys 0m1.801s 00:10:22.584 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.584 16:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.584 ************************************ 00:10:22.584 END TEST raid_state_function_test 00:10:22.584 ************************************ 00:10:22.584 16:51:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:22.584 16:51:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:22.584 16:51:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.585 16:51:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.585 ************************************ 00:10:22.585 START TEST raid_state_function_test_sb 00:10:22.585 ************************************ 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:22.585 16:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77346 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77346' 00:10:22.585 Process raid pid: 77346 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77346 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77346 ']' 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.585 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.585 [2024-11-08 16:51:52.089097] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:22.585 [2024-11-08 16:51:52.089313] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.845 [2024-11-08 16:51:52.235943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.845 [2024-11-08 16:51:52.283781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.845 [2024-11-08 16:51:52.326583] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.845 [2024-11-08 16:51:52.326744] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.414 [2024-11-08 16:51:52.928131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.414 [2024-11-08 16:51:52.928186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.414 [2024-11-08 
16:51:52.928208] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.414 [2024-11-08 16:51:52.928219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.414 [2024-11-08 16:51:52.928225] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:23.414 [2024-11-08 16:51:52.928238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.414 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.673 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.673 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:23.673 "name": "Existed_Raid",
00:10:23.673 "uuid": "636de12d-b023-4a80-8578-58afc6bde2da",
00:10:23.673 "strip_size_kb": 64,
00:10:23.673 "state": "configuring",
00:10:23.673 "raid_level": "concat",
00:10:23.673 "superblock": true,
00:10:23.673 "num_base_bdevs": 3,
00:10:23.673 "num_base_bdevs_discovered": 0,
00:10:23.673 "num_base_bdevs_operational": 3,
00:10:23.673 "base_bdevs_list": [
00:10:23.673 {
00:10:23.673 "name": "BaseBdev1",
00:10:23.673 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:23.673 "is_configured": false,
00:10:23.673 "data_offset": 0,
00:10:23.673 "data_size": 0
00:10:23.673 },
00:10:23.673 {
00:10:23.673 "name": "BaseBdev2",
00:10:23.673 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:23.673 "is_configured": false,
00:10:23.673 "data_offset": 0,
00:10:23.673 "data_size": 0
00:10:23.673 },
00:10:23.673 {
00:10:23.673 "name": "BaseBdev3",
00:10:23.673 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:23.673 "is_configured": false,
00:10:23.673 "data_offset": 0,
00:10:23.673 "data_size": 0
00:10:23.673 }
00:10:23.673 ]
00:10:23.673 }'
00:10:23.673 16:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:23.673 16:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.934 [2024-11-08 16:51:53.339335] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:23.934 [2024-11-08 16:51:53.339446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.934 [2024-11-08 16:51:53.347356] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:23.934 [2024-11-08 16:51:53.347455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:23.934 [2024-11-08 16:51:53.347483] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:23.934 [2024-11-08 16:51:53.347506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:23.934 [2024-11-08 16:51:53.347525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:23.934 [2024-11-08 16:51:53.347547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.934 [2024-11-08 16:51:53.364071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:23.934 BaseBdev1
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.934 [
00:10:23.934 {
00:10:23.934 "name": "BaseBdev1",
00:10:23.934 "aliases": [
00:10:23.934 "fb0955f4-f954-4b12-977a-327caf68127f"
00:10:23.934 ],
00:10:23.934 "product_name": "Malloc disk",
00:10:23.934 "block_size": 512,
00:10:23.934 "num_blocks": 65536,
00:10:23.934 "uuid": "fb0955f4-f954-4b12-977a-327caf68127f",
00:10:23.934 "assigned_rate_limits": {
00:10:23.934 "rw_ios_per_sec": 0,
00:10:23.934 "rw_mbytes_per_sec": 0,
00:10:23.934 "r_mbytes_per_sec": 0,
00:10:23.934 "w_mbytes_per_sec": 0
00:10:23.934 },
00:10:23.934 "claimed": true,
00:10:23.934 "claim_type": "exclusive_write",
00:10:23.934 "zoned": false,
00:10:23.934 "supported_io_types": {
00:10:23.934 "read": true,
00:10:23.934 "write": true,
00:10:23.934 "unmap": true,
00:10:23.934 "flush": true,
00:10:23.934 "reset": true,
00:10:23.934 "nvme_admin": false,
00:10:23.934 "nvme_io": false,
00:10:23.934 "nvme_io_md": false,
00:10:23.934 "write_zeroes": true,
00:10:23.934 "zcopy": true,
00:10:23.934 "get_zone_info": false,
00:10:23.934 "zone_management": false,
00:10:23.934 "zone_append": false,
00:10:23.934 "compare": false,
00:10:23.934 "compare_and_write": false,
00:10:23.934 "abort": true,
00:10:23.934 "seek_hole": false,
00:10:23.934 "seek_data": false,
00:10:23.934 "copy": true,
00:10:23.934 "nvme_iov_md": false
00:10:23.934 },
00:10:23.934 "memory_domains": [
00:10:23.934 {
00:10:23.934 "dma_device_id": "system",
00:10:23.934 "dma_device_type": 1
00:10:23.934 },
00:10:23.934 {
00:10:23.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:23.934 "dma_device_type": 2
00:10:23.934 }
00:10:23.934 ],
00:10:23.934 "driver_specific": {}
00:10:23.934 }
00:10:23.934 ]
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.934 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:23.934 "name": "Existed_Raid",
00:10:23.934 "uuid": "1d2d0200-bf04-4f66-89e9-fd0d4b17c4c7",
00:10:23.934 "strip_size_kb": 64,
00:10:23.934 "state": "configuring",
00:10:23.934 "raid_level": "concat",
00:10:23.934 "superblock": true,
00:10:23.934 "num_base_bdevs": 3,
00:10:23.934 "num_base_bdevs_discovered": 1,
00:10:23.934 "num_base_bdevs_operational": 3,
00:10:23.934 "base_bdevs_list": [
00:10:23.934 {
00:10:23.934 "name": "BaseBdev1",
00:10:23.934 "uuid": "fb0955f4-f954-4b12-977a-327caf68127f",
00:10:23.934 "is_configured": true,
00:10:23.934 "data_offset": 2048,
00:10:23.934 "data_size": 63488
00:10:23.934 },
00:10:23.934 {
00:10:23.934 "name": "BaseBdev2",
00:10:23.934 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:23.935 "is_configured": false,
00:10:23.935 "data_offset": 0,
00:10:23.935 "data_size": 0
00:10:23.935 },
00:10:23.935 {
00:10:23.935 "name": "BaseBdev3",
00:10:23.935 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:23.935 "is_configured": false,
00:10:23.935 "data_offset": 0,
00:10:23.935 "data_size": 0
00:10:23.935 }
00:10:23.935 ]
00:10:23.935 }'
00:10:23.935 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:23.935 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:24.504 [2024-11-08 16:51:53.831340] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:24.504 [2024-11-08 16:51:53.831462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:24.504 [2024-11-08 16:51:53.843350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:24.504 [2024-11-08 16:51:53.845436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:24.504 [2024-11-08 16:51:53.845516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:24.504 [2024-11-08 16:51:53.845553] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:24.504 [2024-11-08 16:51:53.845579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:24.504 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:24.505 "name": "Existed_Raid",
00:10:24.505 "uuid": "db9ba920-69f7-4ca1-bf61-0b078208c992",
00:10:24.505 "strip_size_kb": 64,
00:10:24.505 "state": "configuring",
00:10:24.505 "raid_level": "concat",
00:10:24.505 "superblock": true,
00:10:24.505 "num_base_bdevs": 3,
00:10:24.505 "num_base_bdevs_discovered": 1,
00:10:24.505 "num_base_bdevs_operational": 3,
00:10:24.505 "base_bdevs_list": [
00:10:24.505 {
00:10:24.505 "name": "BaseBdev1",
00:10:24.505 "uuid": "fb0955f4-f954-4b12-977a-327caf68127f",
00:10:24.505 "is_configured": true,
00:10:24.505 "data_offset": 2048,
00:10:24.505 "data_size": 63488
00:10:24.505 },
00:10:24.505 {
00:10:24.505 "name": "BaseBdev2",
00:10:24.505 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:24.505 "is_configured": false,
00:10:24.505 "data_offset": 0,
00:10:24.505 "data_size": 0
00:10:24.505 },
00:10:24.505 {
00:10:24.505 "name": "BaseBdev3",
00:10:24.505 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:24.505 "is_configured": false,
00:10:24.505 "data_offset": 0,
00:10:24.505 "data_size": 0
00:10:24.505 }
00:10:24.505 ]
00:10:24.505 }'
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:24.505 16:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.090 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:25.090 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.090 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.090 [2024-11-08 16:51:54.313450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:25.090 BaseBdev2
00:10:25.090 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.090 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.091 [
00:10:25.091 {
00:10:25.091 "name": "BaseBdev2",
00:10:25.091 "aliases": [
00:10:25.091 "0ca46570-bf2d-4fdb-a3ae-b0417635ccba"
00:10:25.091 ],
00:10:25.091 "product_name": "Malloc disk",
00:10:25.091 "block_size": 512,
00:10:25.091 "num_blocks": 65536,
00:10:25.091 "uuid": "0ca46570-bf2d-4fdb-a3ae-b0417635ccba",
00:10:25.091 "assigned_rate_limits": {
00:10:25.091 "rw_ios_per_sec": 0,
00:10:25.091 "rw_mbytes_per_sec": 0,
00:10:25.091 "r_mbytes_per_sec": 0,
00:10:25.091 "w_mbytes_per_sec": 0
00:10:25.091 },
00:10:25.091 "claimed": true,
00:10:25.091 "claim_type": "exclusive_write",
00:10:25.091 "zoned": false,
00:10:25.091 "supported_io_types": {
00:10:25.091 "read": true,
00:10:25.091 "write": true,
00:10:25.091 "unmap": true,
00:10:25.091 "flush": true,
00:10:25.091 "reset": true,
00:10:25.091 "nvme_admin": false,
00:10:25.091 "nvme_io": false,
00:10:25.091 "nvme_io_md": false,
00:10:25.091 "write_zeroes": true,
00:10:25.091 "zcopy": true,
00:10:25.091 "get_zone_info": false,
00:10:25.091 "zone_management": false,
00:10:25.091 "zone_append": false,
00:10:25.091 "compare": false,
00:10:25.091 "compare_and_write": false,
00:10:25.091 "abort": true,
00:10:25.091 "seek_hole": false,
00:10:25.091 "seek_data": false,
00:10:25.091 "copy": true,
00:10:25.091 "nvme_iov_md": false
00:10:25.091 },
00:10:25.091 "memory_domains": [
00:10:25.091 {
00:10:25.091 "dma_device_id": "system",
00:10:25.091 "dma_device_type": 1
00:10:25.091 },
00:10:25.091 {
00:10:25.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.091 "dma_device_type": 2
00:10:25.091 }
00:10:25.091 ],
00:10:25.091 "driver_specific": {}
00:10:25.091 }
00:10:25.091 ]
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:25.091 "name": "Existed_Raid",
00:10:25.091 "uuid": "db9ba920-69f7-4ca1-bf61-0b078208c992",
00:10:25.091 "strip_size_kb": 64,
00:10:25.091 "state": "configuring",
00:10:25.091 "raid_level": "concat",
00:10:25.091 "superblock": true,
00:10:25.091 "num_base_bdevs": 3,
00:10:25.091 "num_base_bdevs_discovered": 2,
00:10:25.091 "num_base_bdevs_operational": 3,
00:10:25.091 "base_bdevs_list": [
00:10:25.091 {
00:10:25.091 "name": "BaseBdev1",
00:10:25.091 "uuid": "fb0955f4-f954-4b12-977a-327caf68127f",
00:10:25.091 "is_configured": true,
00:10:25.091 "data_offset": 2048,
00:10:25.091 "data_size": 63488
00:10:25.091 },
00:10:25.091 {
00:10:25.091 "name": "BaseBdev2",
00:10:25.091 "uuid": "0ca46570-bf2d-4fdb-a3ae-b0417635ccba",
00:10:25.091 "is_configured": true,
00:10:25.091 "data_offset": 2048,
00:10:25.091 "data_size": 63488
00:10:25.091 },
00:10:25.091 {
00:10:25.091 "name": "BaseBdev3",
00:10:25.091 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:25.091 "is_configured": false,
00:10:25.091 "data_offset": 0,
00:10:25.091 "data_size": 0
00:10:25.091 }
00:10:25.091 ]
00:10:25.091 }'
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:25.091 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.364 BaseBdev3 [2024-11-08 16:51:54.791630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:25.364 [2024-11-08 16:51:54.791835] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:10:25.364 [2024-11-08 16:51:54.791862] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:25.364 [2024-11-08 16:51:54.792144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:10:25.364 [2024-11-08 16:51:54.792274] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:10:25.364 [2024-11-08 16:51:54.792284] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:10:25.364 [2024-11-08 16:51:54.792402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.364 [
00:10:25.364 {
00:10:25.364 "name": "BaseBdev3",
00:10:25.364 "aliases": [
00:10:25.364 "1207270f-090a-4482-a521-c967450793d9"
00:10:25.364 ],
00:10:25.364 "product_name": "Malloc disk",
00:10:25.364 "block_size": 512,
00:10:25.364 "num_blocks": 65536,
00:10:25.364 "uuid": "1207270f-090a-4482-a521-c967450793d9",
00:10:25.364 "assigned_rate_limits": {
00:10:25.364 "rw_ios_per_sec": 0,
00:10:25.364 "rw_mbytes_per_sec": 0,
00:10:25.364 "r_mbytes_per_sec": 0,
00:10:25.364 "w_mbytes_per_sec": 0
00:10:25.364 },
00:10:25.364 "claimed": true,
00:10:25.364 "claim_type": "exclusive_write",
00:10:25.364 "zoned": false,
00:10:25.364 "supported_io_types": {
00:10:25.364 "read": true,
00:10:25.364 "write": true,
00:10:25.364 "unmap": true,
00:10:25.364 "flush": true,
00:10:25.364 "reset": true,
00:10:25.364 "nvme_admin": false,
00:10:25.364 "nvme_io": false,
00:10:25.364 "nvme_io_md": false,
00:10:25.364 "write_zeroes": true,
00:10:25.364 "zcopy": true,
00:10:25.364 "get_zone_info": false,
00:10:25.364 "zone_management": false,
00:10:25.364 "zone_append": false,
00:10:25.364 "compare": false,
00:10:25.364 "compare_and_write": false,
00:10:25.364 "abort": true,
00:10:25.364 "seek_hole": false,
00:10:25.364 "seek_data": false,
00:10:25.364 "copy": true,
00:10:25.364 "nvme_iov_md": false
00:10:25.364 },
00:10:25.364 "memory_domains": [
00:10:25.364 {
00:10:25.364 "dma_device_id": "system",
00:10:25.364 "dma_device_type": 1
00:10:25.364 },
00:10:25.364 {
00:10:25.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.364 "dma_device_type": 2
00:10:25.364 }
00:10:25.364 ],
00:10:25.364 "driver_specific": {}
00:10:25.364 }
00:10:25.364 ]
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:25.364 "name": "Existed_Raid",
00:10:25.364 "uuid": "db9ba920-69f7-4ca1-bf61-0b078208c992",
00:10:25.364 "strip_size_kb": 64,
00:10:25.364 "state": "online",
00:10:25.364 "raid_level": "concat",
00:10:25.364 "superblock": true,
00:10:25.364 "num_base_bdevs": 3,
00:10:25.364 "num_base_bdevs_discovered": 3,
00:10:25.364 "num_base_bdevs_operational": 3,
00:10:25.364 "base_bdevs_list": [
00:10:25.364 {
00:10:25.364 "name": "BaseBdev1",
00:10:25.364 "uuid": "fb0955f4-f954-4b12-977a-327caf68127f",
00:10:25.364 "is_configured": true,
00:10:25.364 "data_offset": 2048,
00:10:25.364 "data_size": 63488
00:10:25.364 },
00:10:25.364 {
00:10:25.364 "name": "BaseBdev2",
00:10:25.364 "uuid": "0ca46570-bf2d-4fdb-a3ae-b0417635ccba",
00:10:25.364 "is_configured": true,
00:10:25.364 "data_offset": 2048,
00:10:25.364 "data_size": 63488
00:10:25.364 },
00:10:25.364 {
00:10:25.364 "name": "BaseBdev3",
00:10:25.364 "uuid": "1207270f-090a-4482-a521-c967450793d9",
00:10:25.364 "is_configured": true,
00:10:25.364 "data_offset": 2048,
00:10:25.364 "data_size": 63488
00:10:25.364 }
00:10:25.364 ]
00:10:25.364 }'
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:25.364 16:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.935 [2024-11-08 16:51:55.275256] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:25.935 "name": "Existed_Raid",
00:10:25.935 "aliases": [
00:10:25.935 "db9ba920-69f7-4ca1-bf61-0b078208c992"
00:10:25.935 ],
00:10:25.935 "product_name": "Raid Volume",
00:10:25.935 "block_size": 512,
00:10:25.935 "num_blocks": 190464,
00:10:25.935 "uuid": "db9ba920-69f7-4ca1-bf61-0b078208c992",
00:10:25.935 "assigned_rate_limits": {
00:10:25.935 "rw_ios_per_sec": 0,
00:10:25.935 "rw_mbytes_per_sec": 0,
00:10:25.935 "r_mbytes_per_sec": 0,
00:10:25.935 "w_mbytes_per_sec": 0
00:10:25.935 },
00:10:25.935 "claimed": false,
00:10:25.935 "zoned": false,
00:10:25.935 "supported_io_types": {
00:10:25.935 "read": true,
00:10:25.935 "write": true,
00:10:25.935 "unmap": true,
00:10:25.935 "flush": true,
00:10:25.935 "reset": true,
00:10:25.935 "nvme_admin": false,
00:10:25.935 "nvme_io": false,
00:10:25.935 "nvme_io_md": false,
00:10:25.935 "write_zeroes": true,
00:10:25.935 "zcopy": false,
00:10:25.935 "get_zone_info": false,
00:10:25.935 "zone_management": false,
00:10:25.935 "zone_append": false,
00:10:25.935 "compare": false,
00:10:25.935 "compare_and_write": false,
00:10:25.935 "abort": false,
00:10:25.935 "seek_hole": false,
00:10:25.935 "seek_data": false,
00:10:25.935 "copy": false,
00:10:25.935 "nvme_iov_md": false
00:10:25.935 },
00:10:25.935 "memory_domains": [
00:10:25.935 {
00:10:25.935 "dma_device_id": "system",
00:10:25.935 "dma_device_type": 1
00:10:25.935 },
00:10:25.935 {
00:10:25.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.935 "dma_device_type": 2
00:10:25.935 },
00:10:25.935 {
00:10:25.935 "dma_device_id": "system",
00:10:25.935 "dma_device_type": 1
00:10:25.935 },
00:10:25.935 {
00:10:25.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.935 "dma_device_type": 2
00:10:25.935 },
00:10:25.935 {
00:10:25.935 "dma_device_id": "system",
00:10:25.935 "dma_device_type": 1
00:10:25.935 },
00:10:25.935 {
00:10:25.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:25.935 "dma_device_type": 2
00:10:25.935 }
00:10:25.935 ],
00:10:25.935 "driver_specific": {
00:10:25.935 "raid": {
00:10:25.935 "uuid": "db9ba920-69f7-4ca1-bf61-0b078208c992",
00:10:25.935 "strip_size_kb": 64,
00:10:25.935 "state": "online",
00:10:25.935 "raid_level": "concat",
00:10:25.935 "superblock": true,
00:10:25.935 "num_base_bdevs": 3,
00:10:25.935 "num_base_bdevs_discovered": 3,
00:10:25.935 "num_base_bdevs_operational": 3,
00:10:25.935 "base_bdevs_list": [
00:10:25.935 {
00:10:25.935 "name": "BaseBdev1",
00:10:25.935 "uuid": "fb0955f4-f954-4b12-977a-327caf68127f",
00:10:25.935 "is_configured": true,
00:10:25.935 "data_offset": 2048,
00:10:25.935 "data_size": 63488
00:10:25.935 },
00:10:25.935 {
00:10:25.935 "name": "BaseBdev2",
00:10:25.935 "uuid": "0ca46570-bf2d-4fdb-a3ae-b0417635ccba",
00:10:25.935 "is_configured": true,
00:10:25.935 "data_offset": 2048,
00:10:25.935 "data_size": 63488
00:10:25.935 },
00:10:25.935 {
00:10:25.935 "name": "BaseBdev3",
00:10:25.935 "uuid": "1207270f-090a-4482-a521-c967450793d9",
00:10:25.935 "is_configured": true,
00:10:25.935 "data_offset": 2048,
00:10:25.935 "data_size": 63488
00:10:25.935 }
00:10:25.935 ]
00:10:25.935 }
00:10:25.935 }
00:10:25.935 }'
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:10:25.935 BaseBdev2
00:10:25.935 BaseBdev3'
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:25.935 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.195 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.195 [2024-11-08 16:51:55.530489] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:26.195 [2024-11-08 16:51:55.530566] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.196 [2024-11-08 16:51:55.530670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.196 "name": "Existed_Raid", 00:10:26.196 "uuid": "db9ba920-69f7-4ca1-bf61-0b078208c992", 00:10:26.196 "strip_size_kb": 64, 00:10:26.196 "state": "offline", 00:10:26.196 "raid_level": "concat", 00:10:26.196 "superblock": true, 00:10:26.196 "num_base_bdevs": 3, 00:10:26.196 "num_base_bdevs_discovered": 2, 00:10:26.196 "num_base_bdevs_operational": 2, 00:10:26.196 "base_bdevs_list": [ 00:10:26.196 { 00:10:26.196 "name": null, 00:10:26.196 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:26.196 "is_configured": false, 00:10:26.196 "data_offset": 0, 00:10:26.196 "data_size": 63488 00:10:26.196 }, 00:10:26.196 { 00:10:26.196 "name": "BaseBdev2", 00:10:26.196 "uuid": "0ca46570-bf2d-4fdb-a3ae-b0417635ccba", 00:10:26.196 "is_configured": true, 00:10:26.196 "data_offset": 2048, 00:10:26.196 "data_size": 63488 00:10:26.196 }, 00:10:26.196 { 00:10:26.196 "name": "BaseBdev3", 00:10:26.196 "uuid": "1207270f-090a-4482-a521-c967450793d9", 00:10:26.196 "is_configured": true, 00:10:26.196 "data_offset": 2048, 00:10:26.196 "data_size": 63488 00:10:26.196 } 00:10:26.196 ] 00:10:26.196 }' 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.196 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.455 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:26.455 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.715 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.715 16:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.715 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.715 16:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.715 [2024-11-08 16:51:56.029122] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.715 [2024-11-08 16:51:56.100349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:26.715 [2024-11-08 16:51:56.100446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.715 BaseBdev2 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.715 
16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.715 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.715 [ 00:10:26.715 { 00:10:26.715 "name": "BaseBdev2", 00:10:26.715 "aliases": [ 00:10:26.715 "d60fe441-05d8-4b43-b5dd-271ea76cd655" 00:10:26.715 ], 00:10:26.715 "product_name": "Malloc disk", 00:10:26.715 "block_size": 512, 00:10:26.715 "num_blocks": 65536, 00:10:26.715 "uuid": "d60fe441-05d8-4b43-b5dd-271ea76cd655", 00:10:26.715 "assigned_rate_limits": { 00:10:26.715 "rw_ios_per_sec": 0, 00:10:26.715 "rw_mbytes_per_sec": 0, 00:10:26.715 "r_mbytes_per_sec": 0, 00:10:26.715 "w_mbytes_per_sec": 0 
00:10:26.715 }, 00:10:26.715 "claimed": false, 00:10:26.715 "zoned": false, 00:10:26.715 "supported_io_types": { 00:10:26.715 "read": true, 00:10:26.715 "write": true, 00:10:26.715 "unmap": true, 00:10:26.715 "flush": true, 00:10:26.715 "reset": true, 00:10:26.715 "nvme_admin": false, 00:10:26.715 "nvme_io": false, 00:10:26.715 "nvme_io_md": false, 00:10:26.715 "write_zeroes": true, 00:10:26.715 "zcopy": true, 00:10:26.715 "get_zone_info": false, 00:10:26.715 "zone_management": false, 00:10:26.715 "zone_append": false, 00:10:26.715 "compare": false, 00:10:26.715 "compare_and_write": false, 00:10:26.715 "abort": true, 00:10:26.715 "seek_hole": false, 00:10:26.715 "seek_data": false, 00:10:26.715 "copy": true, 00:10:26.715 "nvme_iov_md": false 00:10:26.715 }, 00:10:26.715 "memory_domains": [ 00:10:26.716 { 00:10:26.716 "dma_device_id": "system", 00:10:26.716 "dma_device_type": 1 00:10:26.716 }, 00:10:26.716 { 00:10:26.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.716 "dma_device_type": 2 00:10:26.716 } 00:10:26.716 ], 00:10:26.716 "driver_specific": {} 00:10:26.716 } 00:10:26.716 ] 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.716 BaseBdev3 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.716 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.976 [ 00:10:26.976 { 00:10:26.976 "name": "BaseBdev3", 00:10:26.976 "aliases": [ 00:10:26.976 "c1e89c9f-ed20-405d-a256-e61dbb4cfb9c" 00:10:26.976 ], 00:10:26.976 "product_name": "Malloc disk", 00:10:26.976 "block_size": 512, 00:10:26.976 "num_blocks": 65536, 00:10:26.976 "uuid": "c1e89c9f-ed20-405d-a256-e61dbb4cfb9c", 00:10:26.976 "assigned_rate_limits": { 00:10:26.976 "rw_ios_per_sec": 0, 00:10:26.976 "rw_mbytes_per_sec": 0, 
00:10:26.976 "r_mbytes_per_sec": 0, 00:10:26.976 "w_mbytes_per_sec": 0 00:10:26.976 }, 00:10:26.976 "claimed": false, 00:10:26.976 "zoned": false, 00:10:26.976 "supported_io_types": { 00:10:26.976 "read": true, 00:10:26.976 "write": true, 00:10:26.976 "unmap": true, 00:10:26.976 "flush": true, 00:10:26.976 "reset": true, 00:10:26.976 "nvme_admin": false, 00:10:26.976 "nvme_io": false, 00:10:26.976 "nvme_io_md": false, 00:10:26.976 "write_zeroes": true, 00:10:26.976 "zcopy": true, 00:10:26.976 "get_zone_info": false, 00:10:26.976 "zone_management": false, 00:10:26.976 "zone_append": false, 00:10:26.976 "compare": false, 00:10:26.976 "compare_and_write": false, 00:10:26.976 "abort": true, 00:10:26.976 "seek_hole": false, 00:10:26.976 "seek_data": false, 00:10:26.976 "copy": true, 00:10:26.976 "nvme_iov_md": false 00:10:26.976 }, 00:10:26.976 "memory_domains": [ 00:10:26.976 { 00:10:26.976 "dma_device_id": "system", 00:10:26.976 "dma_device_type": 1 00:10:26.976 }, 00:10:26.976 { 00:10:26.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.976 "dma_device_type": 2 00:10:26.976 } 00:10:26.976 ], 00:10:26.976 "driver_specific": {} 00:10:26.976 } 00:10:26.976 ] 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.976 [2024-11-08 16:51:56.260234] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.976 [2024-11-08 16:51:56.260333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.976 [2024-11-08 16:51:56.260370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.976 [2024-11-08 16:51:56.262169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.976 16:51:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.976 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.976 "name": "Existed_Raid", 00:10:26.976 "uuid": "ad5802ab-e0ee-42a8-a2ba-667dd0eb2722", 00:10:26.977 "strip_size_kb": 64, 00:10:26.977 "state": "configuring", 00:10:26.977 "raid_level": "concat", 00:10:26.977 "superblock": true, 00:10:26.977 "num_base_bdevs": 3, 00:10:26.977 "num_base_bdevs_discovered": 2, 00:10:26.977 "num_base_bdevs_operational": 3, 00:10:26.977 "base_bdevs_list": [ 00:10:26.977 { 00:10:26.977 "name": "BaseBdev1", 00:10:26.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.977 "is_configured": false, 00:10:26.977 "data_offset": 0, 00:10:26.977 "data_size": 0 00:10:26.977 }, 00:10:26.977 { 00:10:26.977 "name": "BaseBdev2", 00:10:26.977 "uuid": "d60fe441-05d8-4b43-b5dd-271ea76cd655", 00:10:26.977 "is_configured": true, 00:10:26.977 "data_offset": 2048, 00:10:26.977 "data_size": 63488 00:10:26.977 }, 00:10:26.977 { 00:10:26.977 "name": "BaseBdev3", 00:10:26.977 "uuid": "c1e89c9f-ed20-405d-a256-e61dbb4cfb9c", 00:10:26.977 "is_configured": true, 00:10:26.977 "data_offset": 2048, 00:10:26.977 "data_size": 63488 00:10:26.977 } 00:10:26.977 ] 00:10:26.977 }' 00:10:26.977 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.977 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.237 [2024-11-08 16:51:56.683569] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.237 "name": "Existed_Raid", 00:10:27.237 "uuid": "ad5802ab-e0ee-42a8-a2ba-667dd0eb2722", 00:10:27.237 "strip_size_kb": 64, 00:10:27.237 "state": "configuring", 00:10:27.237 "raid_level": "concat", 00:10:27.237 "superblock": true, 00:10:27.237 "num_base_bdevs": 3, 00:10:27.237 "num_base_bdevs_discovered": 1, 00:10:27.237 "num_base_bdevs_operational": 3, 00:10:27.237 "base_bdevs_list": [ 00:10:27.237 { 00:10:27.237 "name": "BaseBdev1", 00:10:27.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.237 "is_configured": false, 00:10:27.237 "data_offset": 0, 00:10:27.237 "data_size": 0 00:10:27.237 }, 00:10:27.237 { 00:10:27.237 "name": null, 00:10:27.237 "uuid": "d60fe441-05d8-4b43-b5dd-271ea76cd655", 00:10:27.237 "is_configured": false, 00:10:27.237 "data_offset": 0, 00:10:27.237 "data_size": 63488 00:10:27.237 }, 00:10:27.237 { 00:10:27.237 "name": "BaseBdev3", 00:10:27.237 "uuid": "c1e89c9f-ed20-405d-a256-e61dbb4cfb9c", 00:10:27.237 "is_configured": true, 00:10:27.237 "data_offset": 2048, 00:10:27.237 "data_size": 63488 00:10:27.237 } 00:10:27.237 ] 00:10:27.237 }' 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.237 16:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.806 16:51:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.806 [2024-11-08 16:51:57.209653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.806 BaseBdev1 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.806 
16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.806 [ 00:10:27.806 { 00:10:27.806 "name": "BaseBdev1", 00:10:27.806 "aliases": [ 00:10:27.806 "746c5cc7-3446-46cf-b916-c15edbd4701a" 00:10:27.806 ], 00:10:27.806 "product_name": "Malloc disk", 00:10:27.806 "block_size": 512, 00:10:27.806 "num_blocks": 65536, 00:10:27.806 "uuid": "746c5cc7-3446-46cf-b916-c15edbd4701a", 00:10:27.806 "assigned_rate_limits": { 00:10:27.806 "rw_ios_per_sec": 0, 00:10:27.806 "rw_mbytes_per_sec": 0, 00:10:27.806 "r_mbytes_per_sec": 0, 00:10:27.806 "w_mbytes_per_sec": 0 00:10:27.806 }, 00:10:27.806 "claimed": true, 00:10:27.806 "claim_type": "exclusive_write", 00:10:27.806 "zoned": false, 00:10:27.806 "supported_io_types": { 00:10:27.806 "read": true, 00:10:27.806 "write": true, 00:10:27.806 "unmap": true, 00:10:27.806 "flush": true, 00:10:27.806 "reset": true, 00:10:27.806 "nvme_admin": false, 00:10:27.806 "nvme_io": false, 00:10:27.806 "nvme_io_md": false, 00:10:27.806 "write_zeroes": true, 00:10:27.806 "zcopy": true, 00:10:27.806 "get_zone_info": false, 00:10:27.806 "zone_management": false, 00:10:27.806 "zone_append": false, 00:10:27.806 "compare": false, 00:10:27.806 "compare_and_write": false, 00:10:27.806 "abort": true, 00:10:27.806 "seek_hole": false, 00:10:27.806 "seek_data": false, 00:10:27.806 "copy": true, 00:10:27.806 "nvme_iov_md": false 00:10:27.806 }, 00:10:27.806 "memory_domains": [ 00:10:27.806 { 00:10:27.806 "dma_device_id": "system", 00:10:27.806 "dma_device_type": 1 00:10:27.806 }, 00:10:27.806 { 00:10:27.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:27.806 "dma_device_type": 2 00:10:27.806 } 00:10:27.806 ], 00:10:27.806 "driver_specific": {} 00:10:27.806 } 00:10:27.806 ] 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.806 "name": "Existed_Raid", 00:10:27.806 "uuid": "ad5802ab-e0ee-42a8-a2ba-667dd0eb2722", 00:10:27.806 "strip_size_kb": 64, 00:10:27.806 "state": "configuring", 00:10:27.806 "raid_level": "concat", 00:10:27.806 "superblock": true, 00:10:27.806 "num_base_bdevs": 3, 00:10:27.806 "num_base_bdevs_discovered": 2, 00:10:27.806 "num_base_bdevs_operational": 3, 00:10:27.806 "base_bdevs_list": [ 00:10:27.806 { 00:10:27.806 "name": "BaseBdev1", 00:10:27.806 "uuid": "746c5cc7-3446-46cf-b916-c15edbd4701a", 00:10:27.806 "is_configured": true, 00:10:27.806 "data_offset": 2048, 00:10:27.806 "data_size": 63488 00:10:27.806 }, 00:10:27.806 { 00:10:27.806 "name": null, 00:10:27.806 "uuid": "d60fe441-05d8-4b43-b5dd-271ea76cd655", 00:10:27.806 "is_configured": false, 00:10:27.806 "data_offset": 0, 00:10:27.806 "data_size": 63488 00:10:27.806 }, 00:10:27.806 { 00:10:27.806 "name": "BaseBdev3", 00:10:27.806 "uuid": "c1e89c9f-ed20-405d-a256-e61dbb4cfb9c", 00:10:27.806 "is_configured": true, 00:10:27.806 "data_offset": 2048, 00:10:27.806 "data_size": 63488 00:10:27.806 } 00:10:27.806 ] 00:10:27.806 }' 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.806 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.376 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:28.376 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.376 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.376 16:51:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:28.376 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.376 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:28.376 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:28.376 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.376 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.376 [2024-11-08 16:51:57.740794] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:28.376 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.377 "name": "Existed_Raid", 00:10:28.377 "uuid": "ad5802ab-e0ee-42a8-a2ba-667dd0eb2722", 00:10:28.377 "strip_size_kb": 64, 00:10:28.377 "state": "configuring", 00:10:28.377 "raid_level": "concat", 00:10:28.377 "superblock": true, 00:10:28.377 "num_base_bdevs": 3, 00:10:28.377 "num_base_bdevs_discovered": 1, 00:10:28.377 "num_base_bdevs_operational": 3, 00:10:28.377 "base_bdevs_list": [ 00:10:28.377 { 00:10:28.377 "name": "BaseBdev1", 00:10:28.377 "uuid": "746c5cc7-3446-46cf-b916-c15edbd4701a", 00:10:28.377 "is_configured": true, 00:10:28.377 "data_offset": 2048, 00:10:28.377 "data_size": 63488 00:10:28.377 }, 00:10:28.377 { 00:10:28.377 "name": null, 00:10:28.377 "uuid": "d60fe441-05d8-4b43-b5dd-271ea76cd655", 00:10:28.377 "is_configured": false, 00:10:28.377 "data_offset": 0, 00:10:28.377 "data_size": 63488 00:10:28.377 }, 00:10:28.377 { 00:10:28.377 "name": null, 00:10:28.377 "uuid": "c1e89c9f-ed20-405d-a256-e61dbb4cfb9c", 00:10:28.377 "is_configured": false, 00:10:28.377 "data_offset": 0, 00:10:28.377 "data_size": 63488 00:10:28.377 } 00:10:28.377 ] 00:10:28.377 }' 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.377 16:51:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.946 [2024-11-08 16:51:58.236039] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.946 16:51:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.946 "name": "Existed_Raid", 00:10:28.946 "uuid": "ad5802ab-e0ee-42a8-a2ba-667dd0eb2722", 00:10:28.946 "strip_size_kb": 64, 00:10:28.946 "state": "configuring", 00:10:28.946 "raid_level": "concat", 00:10:28.946 "superblock": true, 00:10:28.946 "num_base_bdevs": 3, 00:10:28.946 "num_base_bdevs_discovered": 2, 00:10:28.946 "num_base_bdevs_operational": 3, 00:10:28.946 "base_bdevs_list": [ 00:10:28.946 { 00:10:28.946 "name": "BaseBdev1", 00:10:28.946 "uuid": "746c5cc7-3446-46cf-b916-c15edbd4701a", 00:10:28.946 "is_configured": true, 00:10:28.946 "data_offset": 2048, 00:10:28.946 "data_size": 63488 00:10:28.946 }, 00:10:28.946 { 00:10:28.946 "name": null, 00:10:28.946 "uuid": "d60fe441-05d8-4b43-b5dd-271ea76cd655", 00:10:28.946 "is_configured": 
false, 00:10:28.946 "data_offset": 0, 00:10:28.946 "data_size": 63488 00:10:28.946 }, 00:10:28.946 { 00:10:28.946 "name": "BaseBdev3", 00:10:28.946 "uuid": "c1e89c9f-ed20-405d-a256-e61dbb4cfb9c", 00:10:28.946 "is_configured": true, 00:10:28.946 "data_offset": 2048, 00:10:28.946 "data_size": 63488 00:10:28.946 } 00:10:28.946 ] 00:10:28.946 }' 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.946 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.206 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.206 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:29.206 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.206 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.206 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.206 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:29.206 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:29.206 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.206 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.206 [2024-11-08 16:51:58.731243] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:29.466 16:51:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.466 "name": "Existed_Raid", 00:10:29.466 "uuid": "ad5802ab-e0ee-42a8-a2ba-667dd0eb2722", 00:10:29.466 "strip_size_kb": 64, 00:10:29.466 "state": "configuring", 00:10:29.466 "raid_level": "concat", 00:10:29.466 "superblock": true, 00:10:29.466 "num_base_bdevs": 3, 00:10:29.466 
"num_base_bdevs_discovered": 1, 00:10:29.466 "num_base_bdevs_operational": 3, 00:10:29.466 "base_bdevs_list": [ 00:10:29.466 { 00:10:29.466 "name": null, 00:10:29.466 "uuid": "746c5cc7-3446-46cf-b916-c15edbd4701a", 00:10:29.466 "is_configured": false, 00:10:29.466 "data_offset": 0, 00:10:29.466 "data_size": 63488 00:10:29.466 }, 00:10:29.466 { 00:10:29.466 "name": null, 00:10:29.466 "uuid": "d60fe441-05d8-4b43-b5dd-271ea76cd655", 00:10:29.466 "is_configured": false, 00:10:29.466 "data_offset": 0, 00:10:29.466 "data_size": 63488 00:10:29.466 }, 00:10:29.466 { 00:10:29.466 "name": "BaseBdev3", 00:10:29.466 "uuid": "c1e89c9f-ed20-405d-a256-e61dbb4cfb9c", 00:10:29.466 "is_configured": true, 00:10:29.466 "data_offset": 2048, 00:10:29.466 "data_size": 63488 00:10:29.466 } 00:10:29.466 ] 00:10:29.466 }' 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.466 16:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.725 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.725 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.725 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.725 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:29.725 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.725 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:29.725 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:29.725 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.725 16:51:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.985 [2024-11-08 16:51:59.256824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.985 
16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.985 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.985 "name": "Existed_Raid", 00:10:29.985 "uuid": "ad5802ab-e0ee-42a8-a2ba-667dd0eb2722", 00:10:29.985 "strip_size_kb": 64, 00:10:29.985 "state": "configuring", 00:10:29.985 "raid_level": "concat", 00:10:29.985 "superblock": true, 00:10:29.985 "num_base_bdevs": 3, 00:10:29.985 "num_base_bdevs_discovered": 2, 00:10:29.985 "num_base_bdevs_operational": 3, 00:10:29.985 "base_bdevs_list": [ 00:10:29.985 { 00:10:29.985 "name": null, 00:10:29.985 "uuid": "746c5cc7-3446-46cf-b916-c15edbd4701a", 00:10:29.985 "is_configured": false, 00:10:29.985 "data_offset": 0, 00:10:29.985 "data_size": 63488 00:10:29.985 }, 00:10:29.985 { 00:10:29.985 "name": "BaseBdev2", 00:10:29.985 "uuid": "d60fe441-05d8-4b43-b5dd-271ea76cd655", 00:10:29.985 "is_configured": true, 00:10:29.985 "data_offset": 2048, 00:10:29.985 "data_size": 63488 00:10:29.985 }, 00:10:29.985 { 00:10:29.985 "name": "BaseBdev3", 00:10:29.985 "uuid": "c1e89c9f-ed20-405d-a256-e61dbb4cfb9c", 00:10:29.986 "is_configured": true, 00:10:29.986 "data_offset": 2048, 00:10:29.986 "data_size": 63488 00:10:29.986 } 00:10:29.986 ] 00:10:29.986 }' 00:10:29.986 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.986 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 746c5cc7-3446-46cf-b916-c15edbd4701a 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.246 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.506 [2024-11-08 16:51:59.778798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:30.506 NewBaseBdev 00:10:30.506 [2024-11-08 16:51:59.779059] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:30.506 [2024-11-08 16:51:59.779080] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:30.506 [2024-11-08 16:51:59.779343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:30.506 [2024-11-08 16:51:59.779459] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:30.506 [2024-11-08 16:51:59.779468] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006d00 00:10:30.506 [2024-11-08 16:51:59.779574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.506 [ 00:10:30.506 { 00:10:30.506 "name": "NewBaseBdev", 00:10:30.506 "aliases": [ 00:10:30.506 "746c5cc7-3446-46cf-b916-c15edbd4701a" 00:10:30.506 ], 00:10:30.506 "product_name": "Malloc disk", 00:10:30.506 "block_size": 512, 
00:10:30.506 "num_blocks": 65536, 00:10:30.506 "uuid": "746c5cc7-3446-46cf-b916-c15edbd4701a", 00:10:30.506 "assigned_rate_limits": { 00:10:30.506 "rw_ios_per_sec": 0, 00:10:30.506 "rw_mbytes_per_sec": 0, 00:10:30.506 "r_mbytes_per_sec": 0, 00:10:30.506 "w_mbytes_per_sec": 0 00:10:30.506 }, 00:10:30.506 "claimed": true, 00:10:30.506 "claim_type": "exclusive_write", 00:10:30.506 "zoned": false, 00:10:30.506 "supported_io_types": { 00:10:30.506 "read": true, 00:10:30.506 "write": true, 00:10:30.506 "unmap": true, 00:10:30.506 "flush": true, 00:10:30.506 "reset": true, 00:10:30.506 "nvme_admin": false, 00:10:30.506 "nvme_io": false, 00:10:30.506 "nvme_io_md": false, 00:10:30.506 "write_zeroes": true, 00:10:30.506 "zcopy": true, 00:10:30.506 "get_zone_info": false, 00:10:30.506 "zone_management": false, 00:10:30.506 "zone_append": false, 00:10:30.506 "compare": false, 00:10:30.506 "compare_and_write": false, 00:10:30.506 "abort": true, 00:10:30.506 "seek_hole": false, 00:10:30.506 "seek_data": false, 00:10:30.506 "copy": true, 00:10:30.506 "nvme_iov_md": false 00:10:30.506 }, 00:10:30.506 "memory_domains": [ 00:10:30.506 { 00:10:30.506 "dma_device_id": "system", 00:10:30.506 "dma_device_type": 1 00:10:30.506 }, 00:10:30.506 { 00:10:30.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.506 "dma_device_type": 2 00:10:30.506 } 00:10:30.506 ], 00:10:30.506 "driver_specific": {} 00:10:30.506 } 00:10:30.506 ] 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.506 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.506 "name": "Existed_Raid", 00:10:30.506 "uuid": "ad5802ab-e0ee-42a8-a2ba-667dd0eb2722", 00:10:30.506 "strip_size_kb": 64, 00:10:30.506 "state": "online", 00:10:30.506 "raid_level": "concat", 00:10:30.506 "superblock": true, 00:10:30.506 "num_base_bdevs": 3, 00:10:30.506 "num_base_bdevs_discovered": 3, 00:10:30.506 "num_base_bdevs_operational": 3, 00:10:30.506 "base_bdevs_list": [ 00:10:30.506 { 00:10:30.506 "name": "NewBaseBdev", 00:10:30.506 "uuid": 
"746c5cc7-3446-46cf-b916-c15edbd4701a", 00:10:30.506 "is_configured": true, 00:10:30.506 "data_offset": 2048, 00:10:30.506 "data_size": 63488 00:10:30.506 }, 00:10:30.506 { 00:10:30.506 "name": "BaseBdev2", 00:10:30.506 "uuid": "d60fe441-05d8-4b43-b5dd-271ea76cd655", 00:10:30.506 "is_configured": true, 00:10:30.506 "data_offset": 2048, 00:10:30.506 "data_size": 63488 00:10:30.506 }, 00:10:30.506 { 00:10:30.506 "name": "BaseBdev3", 00:10:30.506 "uuid": "c1e89c9f-ed20-405d-a256-e61dbb4cfb9c", 00:10:30.506 "is_configured": true, 00:10:30.506 "data_offset": 2048, 00:10:30.506 "data_size": 63488 00:10:30.506 } 00:10:30.506 ] 00:10:30.507 }' 00:10:30.507 16:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.507 16:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:10:30.766 [2024-11-08 16:52:00.222396] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.766 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.766 "name": "Existed_Raid", 00:10:30.766 "aliases": [ 00:10:30.766 "ad5802ab-e0ee-42a8-a2ba-667dd0eb2722" 00:10:30.766 ], 00:10:30.766 "product_name": "Raid Volume", 00:10:30.766 "block_size": 512, 00:10:30.766 "num_blocks": 190464, 00:10:30.766 "uuid": "ad5802ab-e0ee-42a8-a2ba-667dd0eb2722", 00:10:30.766 "assigned_rate_limits": { 00:10:30.766 "rw_ios_per_sec": 0, 00:10:30.766 "rw_mbytes_per_sec": 0, 00:10:30.766 "r_mbytes_per_sec": 0, 00:10:30.766 "w_mbytes_per_sec": 0 00:10:30.766 }, 00:10:30.766 "claimed": false, 00:10:30.766 "zoned": false, 00:10:30.766 "supported_io_types": { 00:10:30.766 "read": true, 00:10:30.766 "write": true, 00:10:30.766 "unmap": true, 00:10:30.766 "flush": true, 00:10:30.766 "reset": true, 00:10:30.766 "nvme_admin": false, 00:10:30.766 "nvme_io": false, 00:10:30.766 "nvme_io_md": false, 00:10:30.766 "write_zeroes": true, 00:10:30.766 "zcopy": false, 00:10:30.766 "get_zone_info": false, 00:10:30.766 "zone_management": false, 00:10:30.766 "zone_append": false, 00:10:30.767 "compare": false, 00:10:30.767 "compare_and_write": false, 00:10:30.767 "abort": false, 00:10:30.767 "seek_hole": false, 00:10:30.767 "seek_data": false, 00:10:30.767 "copy": false, 00:10:30.767 "nvme_iov_md": false 00:10:30.767 }, 00:10:30.767 "memory_domains": [ 00:10:30.767 { 00:10:30.767 "dma_device_id": "system", 00:10:30.767 "dma_device_type": 1 00:10:30.767 }, 00:10:30.767 { 00:10:30.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.767 "dma_device_type": 2 00:10:30.767 }, 00:10:30.767 { 00:10:30.767 "dma_device_id": "system", 00:10:30.767 "dma_device_type": 1 00:10:30.767 }, 00:10:30.767 { 00:10:30.767 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.767 "dma_device_type": 2 00:10:30.767 }, 00:10:30.767 { 00:10:30.767 "dma_device_id": "system", 00:10:30.767 "dma_device_type": 1 00:10:30.767 }, 00:10:30.767 { 00:10:30.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.767 "dma_device_type": 2 00:10:30.767 } 00:10:30.767 ], 00:10:30.767 "driver_specific": { 00:10:30.767 "raid": { 00:10:30.767 "uuid": "ad5802ab-e0ee-42a8-a2ba-667dd0eb2722", 00:10:30.767 "strip_size_kb": 64, 00:10:30.767 "state": "online", 00:10:30.767 "raid_level": "concat", 00:10:30.767 "superblock": true, 00:10:30.767 "num_base_bdevs": 3, 00:10:30.767 "num_base_bdevs_discovered": 3, 00:10:30.767 "num_base_bdevs_operational": 3, 00:10:30.767 "base_bdevs_list": [ 00:10:30.767 { 00:10:30.767 "name": "NewBaseBdev", 00:10:30.767 "uuid": "746c5cc7-3446-46cf-b916-c15edbd4701a", 00:10:30.767 "is_configured": true, 00:10:30.767 "data_offset": 2048, 00:10:30.767 "data_size": 63488 00:10:30.767 }, 00:10:30.767 { 00:10:30.767 "name": "BaseBdev2", 00:10:30.767 "uuid": "d60fe441-05d8-4b43-b5dd-271ea76cd655", 00:10:30.767 "is_configured": true, 00:10:30.767 "data_offset": 2048, 00:10:30.767 "data_size": 63488 00:10:30.767 }, 00:10:30.767 { 00:10:30.767 "name": "BaseBdev3", 00:10:30.767 "uuid": "c1e89c9f-ed20-405d-a256-e61dbb4cfb9c", 00:10:30.767 "is_configured": true, 00:10:30.767 "data_offset": 2048, 00:10:30.767 "data_size": 63488 00:10:30.767 } 00:10:30.767 ] 00:10:30.767 } 00:10:30.767 } 00:10:30.767 }' 00:10:30.767 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:31.027 BaseBdev2 00:10:31.027 BaseBdev3' 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.027 [2024-11-08 16:52:00.513606] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.027 [2024-11-08 16:52:00.513700] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.027 [2024-11-08 16:52:00.513802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.027 [2024-11-08 16:52:00.513874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.027 [2024-11-08 16:52:00.513915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77346 00:10:31.027 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77346 ']' 00:10:31.028 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77346 00:10:31.028 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:31.028 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.028 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77346 00:10:31.288 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:31.288 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:31.288 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77346' 00:10:31.288 killing process with pid 77346 00:10:31.288 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77346 00:10:31.288 [2024-11-08 16:52:00.561066] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.288 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77346 00:10:31.288 [2024-11-08 16:52:00.592707] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.546 16:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:31.546 00:10:31.546 real 0m8.849s 00:10:31.546 user 0m15.146s 00:10:31.546 sys 0m1.774s 00:10:31.546 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:31.546 16:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.546 ************************************ 00:10:31.546 END TEST raid_state_function_test_sb 00:10:31.546 ************************************ 00:10:31.546 16:52:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:31.546 16:52:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:31.546 16:52:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.546 16:52:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.546 ************************************ 00:10:31.546 START TEST raid_superblock_test 00:10:31.546 ************************************ 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:31.547 16:52:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77950 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:31.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77950 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77950 ']' 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.547 16:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.547 [2024-11-08 16:52:00.997573] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:31.547 [2024-11-08 16:52:00.997840] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77950 ] 00:10:31.806 [2024-11-08 16:52:01.155413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.806 [2024-11-08 16:52:01.205088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.806 [2024-11-08 16:52:01.247412] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.806 [2024-11-08 16:52:01.247542] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:32.375 
16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.375 malloc1 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.375 [2024-11-08 16:52:01.858473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:32.375 [2024-11-08 16:52:01.858546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.375 [2024-11-08 16:52:01.858569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:32.375 [2024-11-08 16:52:01.858583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.375 [2024-11-08 16:52:01.860894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.375 [2024-11-08 16:52:01.860934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:32.375 pt1 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.375 malloc2 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.375 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.375 [2024-11-08 16:52:01.897492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:32.375 [2024-11-08 16:52:01.897615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.375 [2024-11-08 16:52:01.897678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:32.375 [2024-11-08 16:52:01.897751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.375 [2024-11-08 16:52:01.900515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.375 [2024-11-08 16:52:01.900626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:32.636 
pt2 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.636 malloc3 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.636 [2024-11-08 16:52:01.930287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:32.636 [2024-11-08 16:52:01.930416] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.636 [2024-11-08 16:52:01.930456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:32.636 [2024-11-08 16:52:01.930487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.636 [2024-11-08 16:52:01.932656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.636 [2024-11-08 16:52:01.932746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:32.636 pt3 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.636 [2024-11-08 16:52:01.942290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:32.636 [2024-11-08 16:52:01.944207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:32.636 [2024-11-08 16:52:01.944315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:32.636 [2024-11-08 16:52:01.944476] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:32.636 [2024-11-08 16:52:01.944522] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:32.636 [2024-11-08 16:52:01.944811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:10:32.636 [2024-11-08 16:52:01.944973] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:32.636 [2024-11-08 16:52:01.945019] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:32.636 [2024-11-08 16:52:01.945175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.636 16:52:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.636 "name": "raid_bdev1", 00:10:32.636 "uuid": "2dbda818-3be3-45c0-8214-bf98a031a491", 00:10:32.636 "strip_size_kb": 64, 00:10:32.636 "state": "online", 00:10:32.636 "raid_level": "concat", 00:10:32.636 "superblock": true, 00:10:32.636 "num_base_bdevs": 3, 00:10:32.636 "num_base_bdevs_discovered": 3, 00:10:32.636 "num_base_bdevs_operational": 3, 00:10:32.636 "base_bdevs_list": [ 00:10:32.636 { 00:10:32.636 "name": "pt1", 00:10:32.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:32.636 "is_configured": true, 00:10:32.636 "data_offset": 2048, 00:10:32.636 "data_size": 63488 00:10:32.636 }, 00:10:32.636 { 00:10:32.636 "name": "pt2", 00:10:32.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.636 "is_configured": true, 00:10:32.636 "data_offset": 2048, 00:10:32.636 "data_size": 63488 00:10:32.636 }, 00:10:32.636 { 00:10:32.636 "name": "pt3", 00:10:32.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:32.636 "is_configured": true, 00:10:32.636 "data_offset": 2048, 00:10:32.636 "data_size": 63488 00:10:32.636 } 00:10:32.636 ] 00:10:32.636 }' 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.636 16:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.896 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:32.896 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:32.896 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.896 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:32.896 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.896 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.896 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:32.896 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.896 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.896 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.896 [2024-11-08 16:52:02.401823] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.896 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.156 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.156 "name": "raid_bdev1", 00:10:33.156 "aliases": [ 00:10:33.156 "2dbda818-3be3-45c0-8214-bf98a031a491" 00:10:33.156 ], 00:10:33.156 "product_name": "Raid Volume", 00:10:33.156 "block_size": 512, 00:10:33.156 "num_blocks": 190464, 00:10:33.156 "uuid": "2dbda818-3be3-45c0-8214-bf98a031a491", 00:10:33.156 "assigned_rate_limits": { 00:10:33.156 "rw_ios_per_sec": 0, 00:10:33.156 "rw_mbytes_per_sec": 0, 00:10:33.156 "r_mbytes_per_sec": 0, 00:10:33.156 "w_mbytes_per_sec": 0 00:10:33.156 }, 00:10:33.156 "claimed": false, 00:10:33.156 "zoned": false, 00:10:33.156 "supported_io_types": { 00:10:33.156 "read": true, 00:10:33.156 "write": true, 00:10:33.156 "unmap": true, 00:10:33.156 "flush": true, 00:10:33.156 "reset": true, 00:10:33.156 "nvme_admin": false, 00:10:33.156 "nvme_io": false, 00:10:33.156 "nvme_io_md": false, 00:10:33.156 "write_zeroes": true, 00:10:33.156 "zcopy": false, 00:10:33.156 "get_zone_info": false, 00:10:33.156 "zone_management": false, 00:10:33.156 "zone_append": false, 00:10:33.156 "compare": 
false, 00:10:33.156 "compare_and_write": false, 00:10:33.156 "abort": false, 00:10:33.156 "seek_hole": false, 00:10:33.156 "seek_data": false, 00:10:33.156 "copy": false, 00:10:33.156 "nvme_iov_md": false 00:10:33.156 }, 00:10:33.156 "memory_domains": [ 00:10:33.156 { 00:10:33.156 "dma_device_id": "system", 00:10:33.156 "dma_device_type": 1 00:10:33.156 }, 00:10:33.156 { 00:10:33.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.156 "dma_device_type": 2 00:10:33.156 }, 00:10:33.156 { 00:10:33.156 "dma_device_id": "system", 00:10:33.156 "dma_device_type": 1 00:10:33.156 }, 00:10:33.156 { 00:10:33.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.156 "dma_device_type": 2 00:10:33.156 }, 00:10:33.156 { 00:10:33.156 "dma_device_id": "system", 00:10:33.156 "dma_device_type": 1 00:10:33.156 }, 00:10:33.156 { 00:10:33.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.156 "dma_device_type": 2 00:10:33.156 } 00:10:33.156 ], 00:10:33.156 "driver_specific": { 00:10:33.156 "raid": { 00:10:33.156 "uuid": "2dbda818-3be3-45c0-8214-bf98a031a491", 00:10:33.156 "strip_size_kb": 64, 00:10:33.156 "state": "online", 00:10:33.156 "raid_level": "concat", 00:10:33.156 "superblock": true, 00:10:33.156 "num_base_bdevs": 3, 00:10:33.156 "num_base_bdevs_discovered": 3, 00:10:33.156 "num_base_bdevs_operational": 3, 00:10:33.156 "base_bdevs_list": [ 00:10:33.156 { 00:10:33.156 "name": "pt1", 00:10:33.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.156 "is_configured": true, 00:10:33.156 "data_offset": 2048, 00:10:33.156 "data_size": 63488 00:10:33.156 }, 00:10:33.156 { 00:10:33.156 "name": "pt2", 00:10:33.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.156 "is_configured": true, 00:10:33.156 "data_offset": 2048, 00:10:33.156 "data_size": 63488 00:10:33.156 }, 00:10:33.156 { 00:10:33.156 "name": "pt3", 00:10:33.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.156 "is_configured": true, 00:10:33.156 "data_offset": 2048, 00:10:33.156 
"data_size": 63488 00:10:33.156 } 00:10:33.156 ] 00:10:33.156 } 00:10:33.156 } 00:10:33.156 }' 00:10:33.156 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.156 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:33.156 pt2 00:10:33.156 pt3' 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.157 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.157 [2024-11-08 16:52:02.673275] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2dbda818-3be3-45c0-8214-bf98a031a491 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2dbda818-3be3-45c0-8214-bf98a031a491 ']' 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.417 [2024-11-08 16:52:02.720894] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.417 [2024-11-08 16:52:02.720921] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.417 [2024-11-08 16:52:02.721019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.417 [2024-11-08 16:52:02.721080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.417 [2024-11-08 16:52:02.721094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.417 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.417 [2024-11-08 16:52:02.872749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:33.417 [2024-11-08 16:52:02.874910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:33.417 
[2024-11-08 16:52:02.874966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:33.417 [2024-11-08 16:52:02.875023] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:33.417 [2024-11-08 16:52:02.875076] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:33.417 [2024-11-08 16:52:02.875111] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:33.417 [2024-11-08 16:52:02.875127] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.417 [2024-11-08 16:52:02.875139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:10:33.417 request: 00:10:33.417 { 00:10:33.417 "name": "raid_bdev1", 00:10:33.417 "raid_level": "concat", 00:10:33.417 "base_bdevs": [ 00:10:33.417 "malloc1", 00:10:33.417 "malloc2", 00:10:33.417 "malloc3" 00:10:33.417 ], 00:10:33.417 "strip_size_kb": 64, 00:10:33.417 "superblock": false, 00:10:33.417 "method": "bdev_raid_create", 00:10:33.417 "req_id": 1 00:10:33.417 } 00:10:33.417 Got JSON-RPC error response 00:10:33.417 response: 00:10:33.417 { 00:10:33.417 "code": -17, 00:10:33.418 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:33.418 } 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:33.418 16:52:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.418 [2024-11-08 16:52:02.928575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:33.418 [2024-11-08 16:52:02.928734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.418 [2024-11-08 16:52:02.928774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:33.418 [2024-11-08 16:52:02.928839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.418 [2024-11-08 16:52:02.931081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.418 [2024-11-08 16:52:02.931170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:33.418 [2024-11-08 16:52:02.931278] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:33.418 [2024-11-08 16:52:02.931350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:10:33.418 pt1 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.418 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.677 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.677 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.677 "name": "raid_bdev1", 00:10:33.677 "uuid": 
"2dbda818-3be3-45c0-8214-bf98a031a491", 00:10:33.677 "strip_size_kb": 64, 00:10:33.677 "state": "configuring", 00:10:33.677 "raid_level": "concat", 00:10:33.677 "superblock": true, 00:10:33.677 "num_base_bdevs": 3, 00:10:33.677 "num_base_bdevs_discovered": 1, 00:10:33.677 "num_base_bdevs_operational": 3, 00:10:33.677 "base_bdevs_list": [ 00:10:33.677 { 00:10:33.677 "name": "pt1", 00:10:33.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.677 "is_configured": true, 00:10:33.677 "data_offset": 2048, 00:10:33.677 "data_size": 63488 00:10:33.677 }, 00:10:33.677 { 00:10:33.677 "name": null, 00:10:33.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.678 "is_configured": false, 00:10:33.678 "data_offset": 2048, 00:10:33.678 "data_size": 63488 00:10:33.678 }, 00:10:33.678 { 00:10:33.678 "name": null, 00:10:33.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.678 "is_configured": false, 00:10:33.678 "data_offset": 2048, 00:10:33.678 "data_size": 63488 00:10:33.678 } 00:10:33.678 ] 00:10:33.678 }' 00:10:33.678 16:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.678 16:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.937 [2024-11-08 16:52:03.355868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:33.937 [2024-11-08 16:52:03.355944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.937 [2024-11-08 16:52:03.355967] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:33.937 [2024-11-08 16:52:03.355981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.937 [2024-11-08 16:52:03.356381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.937 [2024-11-08 16:52:03.356401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:33.937 [2024-11-08 16:52:03.356476] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:33.937 [2024-11-08 16:52:03.356500] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.937 pt2 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.937 [2024-11-08 16:52:03.367854] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.937 "name": "raid_bdev1", 00:10:33.937 "uuid": "2dbda818-3be3-45c0-8214-bf98a031a491", 00:10:33.937 "strip_size_kb": 64, 00:10:33.937 "state": "configuring", 00:10:33.937 "raid_level": "concat", 00:10:33.937 "superblock": true, 00:10:33.937 "num_base_bdevs": 3, 00:10:33.937 "num_base_bdevs_discovered": 1, 00:10:33.937 "num_base_bdevs_operational": 3, 00:10:33.937 "base_bdevs_list": [ 00:10:33.937 { 00:10:33.937 "name": "pt1", 00:10:33.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.937 "is_configured": true, 00:10:33.937 "data_offset": 2048, 00:10:33.937 "data_size": 63488 00:10:33.937 }, 00:10:33.937 { 00:10:33.937 "name": null, 00:10:33.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.937 "is_configured": false, 00:10:33.937 "data_offset": 0, 00:10:33.937 "data_size": 63488 00:10:33.937 }, 00:10:33.937 { 00:10:33.937 "name": null, 00:10:33.937 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:33.937 "is_configured": false, 00:10:33.937 "data_offset": 2048, 00:10:33.937 "data_size": 63488 00:10:33.937 } 00:10:33.937 ] 00:10:33.937 }' 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.937 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.506 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:34.506 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:34.506 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:34.506 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.506 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.506 [2024-11-08 16:52:03.831082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:34.507 [2024-11-08 16:52:03.831242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.507 [2024-11-08 16:52:03.831268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:34.507 [2024-11-08 16:52:03.831278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.507 [2024-11-08 16:52:03.831738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.507 [2024-11-08 16:52:03.831764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:34.507 [2024-11-08 16:52:03.831847] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:34.507 [2024-11-08 16:52:03.831872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:34.507 pt2 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.507 [2024-11-08 16:52:03.843000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:34.507 [2024-11-08 16:52:03.843050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.507 [2024-11-08 16:52:03.843070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:34.507 [2024-11-08 16:52:03.843078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.507 [2024-11-08 16:52:03.843447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.507 [2024-11-08 16:52:03.843469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:34.507 [2024-11-08 16:52:03.843541] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:34.507 [2024-11-08 16:52:03.843565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:34.507 [2024-11-08 16:52:03.843698] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:34.507 [2024-11-08 16:52:03.843708] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:34.507 [2024-11-08 16:52:03.843948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:34.507 [2024-11-08 
16:52:03.844076] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:34.507 [2024-11-08 16:52:03.844086] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:34.507 [2024-11-08 16:52:03.844186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.507 pt3 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.507 "name": "raid_bdev1", 00:10:34.507 "uuid": "2dbda818-3be3-45c0-8214-bf98a031a491", 00:10:34.507 "strip_size_kb": 64, 00:10:34.507 "state": "online", 00:10:34.507 "raid_level": "concat", 00:10:34.507 "superblock": true, 00:10:34.507 "num_base_bdevs": 3, 00:10:34.507 "num_base_bdevs_discovered": 3, 00:10:34.507 "num_base_bdevs_operational": 3, 00:10:34.507 "base_bdevs_list": [ 00:10:34.507 { 00:10:34.507 "name": "pt1", 00:10:34.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.507 "is_configured": true, 00:10:34.507 "data_offset": 2048, 00:10:34.507 "data_size": 63488 00:10:34.507 }, 00:10:34.507 { 00:10:34.507 "name": "pt2", 00:10:34.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.507 "is_configured": true, 00:10:34.507 "data_offset": 2048, 00:10:34.507 "data_size": 63488 00:10:34.507 }, 00:10:34.507 { 00:10:34.507 "name": "pt3", 00:10:34.507 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.507 "is_configured": true, 00:10:34.507 "data_offset": 2048, 00:10:34.507 "data_size": 63488 00:10:34.507 } 00:10:34.507 ] 00:10:34.507 }' 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.507 16:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.767 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:34.767 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:34.767 
16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.767 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.767 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.767 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.767 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.767 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.767 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.767 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.767 [2024-11-08 16:52:04.254622] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.767 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.767 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.767 "name": "raid_bdev1", 00:10:34.767 "aliases": [ 00:10:34.767 "2dbda818-3be3-45c0-8214-bf98a031a491" 00:10:34.767 ], 00:10:34.767 "product_name": "Raid Volume", 00:10:34.767 "block_size": 512, 00:10:34.767 "num_blocks": 190464, 00:10:34.767 "uuid": "2dbda818-3be3-45c0-8214-bf98a031a491", 00:10:34.767 "assigned_rate_limits": { 00:10:34.767 "rw_ios_per_sec": 0, 00:10:34.767 "rw_mbytes_per_sec": 0, 00:10:34.767 "r_mbytes_per_sec": 0, 00:10:34.767 "w_mbytes_per_sec": 0 00:10:34.767 }, 00:10:34.767 "claimed": false, 00:10:34.767 "zoned": false, 00:10:34.767 "supported_io_types": { 00:10:34.767 "read": true, 00:10:34.767 "write": true, 00:10:34.767 "unmap": true, 00:10:34.767 "flush": true, 00:10:34.767 "reset": true, 00:10:34.767 "nvme_admin": false, 00:10:34.767 "nvme_io": false, 00:10:34.767 "nvme_io_md": false, 00:10:34.767 
"write_zeroes": true, 00:10:34.767 "zcopy": false, 00:10:34.767 "get_zone_info": false, 00:10:34.767 "zone_management": false, 00:10:34.767 "zone_append": false, 00:10:34.767 "compare": false, 00:10:34.767 "compare_and_write": false, 00:10:34.767 "abort": false, 00:10:34.767 "seek_hole": false, 00:10:34.767 "seek_data": false, 00:10:34.767 "copy": false, 00:10:34.767 "nvme_iov_md": false 00:10:34.767 }, 00:10:34.767 "memory_domains": [ 00:10:34.767 { 00:10:34.767 "dma_device_id": "system", 00:10:34.767 "dma_device_type": 1 00:10:34.767 }, 00:10:34.767 { 00:10:34.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.767 "dma_device_type": 2 00:10:34.767 }, 00:10:34.767 { 00:10:34.767 "dma_device_id": "system", 00:10:34.767 "dma_device_type": 1 00:10:34.767 }, 00:10:34.767 { 00:10:34.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.767 "dma_device_type": 2 00:10:34.767 }, 00:10:34.767 { 00:10:34.767 "dma_device_id": "system", 00:10:34.767 "dma_device_type": 1 00:10:34.767 }, 00:10:34.767 { 00:10:34.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.767 "dma_device_type": 2 00:10:34.767 } 00:10:34.767 ], 00:10:34.767 "driver_specific": { 00:10:34.767 "raid": { 00:10:34.767 "uuid": "2dbda818-3be3-45c0-8214-bf98a031a491", 00:10:34.767 "strip_size_kb": 64, 00:10:34.767 "state": "online", 00:10:34.767 "raid_level": "concat", 00:10:34.767 "superblock": true, 00:10:34.767 "num_base_bdevs": 3, 00:10:34.767 "num_base_bdevs_discovered": 3, 00:10:34.767 "num_base_bdevs_operational": 3, 00:10:34.767 "base_bdevs_list": [ 00:10:34.767 { 00:10:34.767 "name": "pt1", 00:10:34.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.768 "is_configured": true, 00:10:34.768 "data_offset": 2048, 00:10:34.768 "data_size": 63488 00:10:34.768 }, 00:10:34.768 { 00:10:34.768 "name": "pt2", 00:10:34.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.768 "is_configured": true, 00:10:34.768 "data_offset": 2048, 00:10:34.768 "data_size": 63488 00:10:34.768 }, 00:10:34.768 
{ 00:10:34.768 "name": "pt3", 00:10:34.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.768 "is_configured": true, 00:10:34.768 "data_offset": 2048, 00:10:34.768 "data_size": 63488 00:10:34.768 } 00:10:34.768 ] 00:10:34.768 } 00:10:34.768 } 00:10:34.768 }' 00:10:34.768 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:35.028 pt2 00:10:35.028 pt3' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:35.028 16:52:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.028 
[2024-11-08 16:52:04.514142] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2dbda818-3be3-45c0-8214-bf98a031a491 '!=' 2dbda818-3be3-45c0-8214-bf98a031a491 ']' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77950 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77950 ']' 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77950 00:10:35.028 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:35.303 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.303 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77950 00:10:35.303 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.303 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.303 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77950' 00:10:35.303 killing process with pid 77950 00:10:35.303 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77950 00:10:35.303 [2024-11-08 16:52:04.591248] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.303 16:52:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 77950 00:10:35.303 [2024-11-08 16:52:04.591414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.303 [2024-11-08 16:52:04.591506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.303 [2024-11-08 16:52:04.591542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:35.303 [2024-11-08 16:52:04.625203] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.579 16:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:35.579 00:10:35.579 real 0m3.960s 00:10:35.579 user 0m6.204s 00:10:35.579 sys 0m0.889s 00:10:35.579 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.579 ************************************ 00:10:35.579 END TEST raid_superblock_test 00:10:35.579 ************************************ 00:10:35.580 16:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.580 16:52:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:35.580 16:52:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:35.580 16:52:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.580 16:52:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.580 ************************************ 00:10:35.580 START TEST raid_read_error_test 00:10:35.580 ************************************ 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:35.580 16:52:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TslEQTeiUG 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78192 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78192 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78192 ']' 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.580 16:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.580 [2024-11-08 16:52:05.047394] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:35.580 [2024-11-08 16:52:05.047605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78192 ] 00:10:35.839 [2024-11-08 16:52:05.193844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.839 [2024-11-08 16:52:05.237860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.839 [2024-11-08 16:52:05.279742] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.839 [2024-11-08 16:52:05.279856] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.409 BaseBdev1_malloc 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.409 true 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.409 [2024-11-08 16:52:05.905680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:36.409 [2024-11-08 16:52:05.905740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.409 [2024-11-08 16:52:05.905763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:36.409 [2024-11-08 16:52:05.905778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.409 [2024-11-08 16:52:05.907892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.409 [2024-11-08 16:52:05.907934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:36.409 BaseBdev1 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.409 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.669 BaseBdev2_malloc 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.669 true 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.669 [2024-11-08 16:52:05.956511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:36.669 [2024-11-08 16:52:05.956564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.669 [2024-11-08 16:52:05.956582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:36.669 [2024-11-08 16:52:05.956591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.669 [2024-11-08 16:52:05.958662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.669 [2024-11-08 16:52:05.958737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:36.669 BaseBdev2 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.669 BaseBdev3_malloc 00:10:36.669 16:52:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.669 true 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.669 16:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.669 [2024-11-08 16:52:05.996918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:36.669 [2024-11-08 16:52:05.997006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.669 [2024-11-08 16:52:05.997028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:36.669 [2024-11-08 16:52:05.997038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.669 [2024-11-08 16:52:05.999035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.669 [2024-11-08 16:52:05.999072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:36.669 BaseBdev3 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.669 [2024-11-08 16:52:06.008962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.669 [2024-11-08 16:52:06.010757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.669 [2024-11-08 16:52:06.010837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.669 [2024-11-08 16:52:06.011009] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:36.669 [2024-11-08 16:52:06.011031] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:36.669 [2024-11-08 16:52:06.011273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:36.669 [2024-11-08 16:52:06.011421] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:36.669 [2024-11-08 16:52:06.011431] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:36.669 [2024-11-08 16:52:06.011550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.669 16:52:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.669 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.669 "name": "raid_bdev1", 00:10:36.669 "uuid": "b2154b5d-eb60-4f02-8c44-285c4b1415ee", 00:10:36.669 "strip_size_kb": 64, 00:10:36.669 "state": "online", 00:10:36.669 "raid_level": "concat", 00:10:36.669 "superblock": true, 00:10:36.669 "num_base_bdevs": 3, 00:10:36.669 "num_base_bdevs_discovered": 3, 00:10:36.669 "num_base_bdevs_operational": 3, 00:10:36.669 "base_bdevs_list": [ 00:10:36.669 { 00:10:36.669 "name": "BaseBdev1", 00:10:36.669 "uuid": "ebbed6d1-544d-5a1b-a968-65aa529775f1", 00:10:36.669 "is_configured": true, 00:10:36.669 "data_offset": 2048, 00:10:36.669 "data_size": 63488 00:10:36.669 }, 00:10:36.669 { 00:10:36.669 "name": "BaseBdev2", 00:10:36.669 "uuid": "23aa57d4-169a-544d-9c91-8527321df855", 00:10:36.669 "is_configured": true, 00:10:36.669 "data_offset": 2048, 00:10:36.669 "data_size": 63488 
00:10:36.669 }, 00:10:36.669 { 00:10:36.669 "name": "BaseBdev3", 00:10:36.669 "uuid": "6b0ed6ea-1c7d-582f-9a56-b080493ca5f6", 00:10:36.670 "is_configured": true, 00:10:36.670 "data_offset": 2048, 00:10:36.670 "data_size": 63488 00:10:36.670 } 00:10:36.670 ] 00:10:36.670 }' 00:10:36.670 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.670 16:52:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.238 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:37.238 16:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:37.238 [2024-11-08 16:52:06.560397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.173 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.173 "name": "raid_bdev1", 00:10:38.173 "uuid": "b2154b5d-eb60-4f02-8c44-285c4b1415ee", 00:10:38.173 "strip_size_kb": 64, 00:10:38.173 "state": "online", 00:10:38.173 "raid_level": "concat", 00:10:38.174 "superblock": true, 00:10:38.174 "num_base_bdevs": 3, 00:10:38.174 "num_base_bdevs_discovered": 3, 00:10:38.174 "num_base_bdevs_operational": 3, 00:10:38.174 "base_bdevs_list": [ 00:10:38.174 { 00:10:38.174 "name": "BaseBdev1", 00:10:38.174 "uuid": "ebbed6d1-544d-5a1b-a968-65aa529775f1", 00:10:38.174 "is_configured": true, 00:10:38.174 "data_offset": 2048, 00:10:38.174 "data_size": 63488 
00:10:38.174 }, 00:10:38.174 { 00:10:38.174 "name": "BaseBdev2", 00:10:38.174 "uuid": "23aa57d4-169a-544d-9c91-8527321df855", 00:10:38.174 "is_configured": true, 00:10:38.174 "data_offset": 2048, 00:10:38.174 "data_size": 63488 00:10:38.174 }, 00:10:38.174 { 00:10:38.174 "name": "BaseBdev3", 00:10:38.174 "uuid": "6b0ed6ea-1c7d-582f-9a56-b080493ca5f6", 00:10:38.174 "is_configured": true, 00:10:38.174 "data_offset": 2048, 00:10:38.174 "data_size": 63488 00:10:38.174 } 00:10:38.174 ] 00:10:38.174 }' 00:10:38.174 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.174 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.432 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.432 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.432 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.432 [2024-11-08 16:52:07.944077] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.432 [2024-11-08 16:52:07.944164] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.432 [2024-11-08 16:52:07.946717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.432 [2024-11-08 16:52:07.946805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.432 [2024-11-08 16:52:07.946858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.433 [2024-11-08 16:52:07.946901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:38.433 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.433 16:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # 
killprocess 78192 00:10:38.433 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78192 ']' 00:10:38.433 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78192 00:10:38.433 { 00:10:38.433 "results": [ 00:10:38.433 { 00:10:38.433 "job": "raid_bdev1", 00:10:38.433 "core_mask": "0x1", 00:10:38.433 "workload": "randrw", 00:10:38.433 "percentage": 50, 00:10:38.433 "status": "finished", 00:10:38.433 "queue_depth": 1, 00:10:38.433 "io_size": 131072, 00:10:38.433 "runtime": 1.384655, 00:10:38.433 "iops": 16756.520577327927, 00:10:38.433 "mibps": 2094.565072165991, 00:10:38.433 "io_failed": 1, 00:10:38.433 "io_timeout": 0, 00:10:38.433 "avg_latency_us": 82.7128315548716, 00:10:38.433 "min_latency_us": 25.7117903930131, 00:10:38.433 "max_latency_us": 1345.0620087336245 00:10:38.433 } 00:10:38.433 ], 00:10:38.433 "core_count": 1 00:10:38.433 } 00:10:38.433 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:38.433 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:38.691 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78192 00:10:38.691 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:38.691 killing process with pid 78192 00:10:38.691 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:38.691 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78192' 00:10:38.691 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78192 00:10:38.691 [2024-11-08 16:52:07.993391] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.691 16:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78192 00:10:38.691 [2024-11-08 
16:52:08.019219] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.950 16:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:38.950 16:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TslEQTeiUG 00:10:38.950 16:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:38.950 16:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:38.950 16:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:38.950 16:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.950 16:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:38.950 16:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:38.950 00:10:38.950 real 0m3.321s 00:10:38.950 user 0m4.217s 00:10:38.950 sys 0m0.535s 00:10:38.950 16:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.950 ************************************ 00:10:38.950 END TEST raid_read_error_test 00:10:38.950 ************************************ 00:10:38.950 16:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.950 16:52:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:38.950 16:52:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:38.950 16:52:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.950 16:52:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.950 ************************************ 00:10:38.950 START TEST raid_write_error_test 00:10:38.950 ************************************ 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:10:38.950 16:52:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:38.950 16:52:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6XorbRXg6W 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78321 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78321 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78321 ']' 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:38.950 16:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.950 [2024-11-08 16:52:08.425284] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:38.950 [2024-11-08 16:52:08.425517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78321 ] 00:10:39.208 [2024-11-08 16:52:08.584785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.208 [2024-11-08 16:52:08.631468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.208 [2024-11-08 16:52:08.673814] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.208 [2024-11-08 16:52:08.673852] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.774 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:39.774 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:39.774 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.774 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:39.774 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.774 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.774 BaseBdev1_malloc 00:10:39.774 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.774 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:39.774 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.774 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.033 true 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.033 [2024-11-08 16:52:09.307979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:40.033 [2024-11-08 16:52:09.308087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.033 [2024-11-08 16:52:09.308121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:40.033 [2024-11-08 16:52:09.308147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.033 [2024-11-08 16:52:09.310387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.033 [2024-11-08 16:52:09.310430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:40.033 BaseBdev1 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.033 BaseBdev2_malloc 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.033 true 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.033 [2024-11-08 16:52:09.358577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:40.033 [2024-11-08 16:52:09.358692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.033 [2024-11-08 16:52:09.358715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:40.033 [2024-11-08 16:52:09.358724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.033 [2024-11-08 16:52:09.360839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.033 [2024-11-08 16:52:09.360875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:40.033 BaseBdev2 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.033 16:52:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.033 BaseBdev3_malloc 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.033 true 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.033 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.033 [2024-11-08 16:52:09.399170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:40.033 [2024-11-08 16:52:09.399216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.033 [2024-11-08 16:52:09.399235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:40.034 [2024-11-08 16:52:09.399244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.034 [2024-11-08 16:52:09.401302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.034 [2024-11-08 16:52:09.401338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:40.034 BaseBdev3 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.034 [2024-11-08 16:52:09.411223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.034 [2024-11-08 16:52:09.413053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.034 [2024-11-08 16:52:09.413134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.034 [2024-11-08 16:52:09.413307] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:40.034 [2024-11-08 16:52:09.413322] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:40.034 [2024-11-08 16:52:09.413563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:40.034 [2024-11-08 16:52:09.413707] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:40.034 [2024-11-08 16:52:09.413718] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:40.034 [2024-11-08 16:52:09.413835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.034 "name": "raid_bdev1", 00:10:40.034 "uuid": "2631fdbb-b9ff-494e-805b-eedd9b1ea6a6", 00:10:40.034 "strip_size_kb": 64, 00:10:40.034 "state": "online", 00:10:40.034 "raid_level": "concat", 00:10:40.034 "superblock": true, 00:10:40.034 "num_base_bdevs": 3, 00:10:40.034 "num_base_bdevs_discovered": 3, 00:10:40.034 "num_base_bdevs_operational": 3, 00:10:40.034 "base_bdevs_list": [ 00:10:40.034 { 00:10:40.034 
"name": "BaseBdev1", 00:10:40.034 "uuid": "53a0d13b-4a95-51a2-b551-7e33eaeb452f", 00:10:40.034 "is_configured": true, 00:10:40.034 "data_offset": 2048, 00:10:40.034 "data_size": 63488 00:10:40.034 }, 00:10:40.034 { 00:10:40.034 "name": "BaseBdev2", 00:10:40.034 "uuid": "38901c38-d607-5a6c-acf5-42569d5c0ae7", 00:10:40.034 "is_configured": true, 00:10:40.034 "data_offset": 2048, 00:10:40.034 "data_size": 63488 00:10:40.034 }, 00:10:40.034 { 00:10:40.034 "name": "BaseBdev3", 00:10:40.034 "uuid": "0030c64c-7cde-5a37-9a88-4200010f0e36", 00:10:40.034 "is_configured": true, 00:10:40.034 "data_offset": 2048, 00:10:40.034 "data_size": 63488 00:10:40.034 } 00:10:40.034 ] 00:10:40.034 }' 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.034 16:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.600 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:40.600 16:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:40.600 [2024-11-08 16:52:09.934687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:41.537 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:41.537 16:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.538 "name": "raid_bdev1", 00:10:41.538 "uuid": "2631fdbb-b9ff-494e-805b-eedd9b1ea6a6", 00:10:41.538 "strip_size_kb": 64, 00:10:41.538 "state": "online", 
00:10:41.538 "raid_level": "concat", 00:10:41.538 "superblock": true, 00:10:41.538 "num_base_bdevs": 3, 00:10:41.538 "num_base_bdevs_discovered": 3, 00:10:41.538 "num_base_bdevs_operational": 3, 00:10:41.538 "base_bdevs_list": [ 00:10:41.538 { 00:10:41.538 "name": "BaseBdev1", 00:10:41.538 "uuid": "53a0d13b-4a95-51a2-b551-7e33eaeb452f", 00:10:41.538 "is_configured": true, 00:10:41.538 "data_offset": 2048, 00:10:41.538 "data_size": 63488 00:10:41.538 }, 00:10:41.538 { 00:10:41.538 "name": "BaseBdev2", 00:10:41.538 "uuid": "38901c38-d607-5a6c-acf5-42569d5c0ae7", 00:10:41.538 "is_configured": true, 00:10:41.538 "data_offset": 2048, 00:10:41.538 "data_size": 63488 00:10:41.538 }, 00:10:41.538 { 00:10:41.538 "name": "BaseBdev3", 00:10:41.538 "uuid": "0030c64c-7cde-5a37-9a88-4200010f0e36", 00:10:41.538 "is_configured": true, 00:10:41.538 "data_offset": 2048, 00:10:41.538 "data_size": 63488 00:10:41.538 } 00:10:41.538 ] 00:10:41.538 }' 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.538 16:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.797 16:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:41.797 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.797 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.797 [2024-11-08 16:52:11.291175] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.797 [2024-11-08 16:52:11.291210] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.797 [2024-11-08 16:52:11.293758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.797 [2024-11-08 16:52:11.293811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.797 [2024-11-08 16:52:11.293845] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.797 [2024-11-08 16:52:11.293860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:41.797 { 00:10:41.797 "results": [ 00:10:41.797 { 00:10:41.797 "job": "raid_bdev1", 00:10:41.797 "core_mask": "0x1", 00:10:41.797 "workload": "randrw", 00:10:41.797 "percentage": 50, 00:10:41.797 "status": "finished", 00:10:41.797 "queue_depth": 1, 00:10:41.797 "io_size": 131072, 00:10:41.797 "runtime": 1.356996, 00:10:41.797 "iops": 16595.480016153328, 00:10:41.797 "mibps": 2074.435002019166, 00:10:41.797 "io_failed": 1, 00:10:41.797 "io_timeout": 0, 00:10:41.797 "avg_latency_us": 83.44075687533945, 00:10:41.797 "min_latency_us": 26.494323144104804, 00:10:41.797 "max_latency_us": 1452.380786026201 00:10:41.797 } 00:10:41.797 ], 00:10:41.797 "core_count": 1 00:10:41.797 } 00:10:41.797 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.797 16:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78321 00:10:41.797 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78321 ']' 00:10:41.797 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78321 00:10:41.797 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:41.797 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.797 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78321 00:10:42.055 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.055 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.055 16:52:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 78321' 00:10:42.055 killing process with pid 78321 00:10:42.055 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78321 00:10:42.055 [2024-11-08 16:52:11.344269] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.055 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78321 00:10:42.055 [2024-11-08 16:52:11.369777] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.315 16:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6XorbRXg6W 00:10:42.315 16:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:42.315 16:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:42.315 16:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:42.315 16:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:42.315 16:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:42.315 16:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:42.315 16:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:42.315 00:10:42.315 real 0m3.288s 00:10:42.315 user 0m4.168s 00:10:42.315 sys 0m0.525s 00:10:42.315 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.315 16:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.315 ************************************ 00:10:42.315 END TEST raid_write_error_test 00:10:42.315 ************************************ 00:10:42.315 16:52:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:42.315 16:52:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:42.315 16:52:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:42.315 16:52:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.315 16:52:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.315 ************************************ 00:10:42.315 START TEST raid_state_function_test 00:10:42.315 ************************************ 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:42.315 Process raid pid: 78454 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78454 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78454' 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78454 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78454 ']' 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.315 16:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.315 [2024-11-08 16:52:11.777937] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:42.315 [2024-11-08 16:52:11.778099] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.575 [2024-11-08 16:52:11.921775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.575 [2024-11-08 16:52:11.965533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.575 [2024-11-08 16:52:12.007219] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.575 [2024-11-08 16:52:12.007344] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.144 [2024-11-08 16:52:12.616406] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.144 [2024-11-08 16:52:12.616531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.144 [2024-11-08 16:52:12.616568] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.144 [2024-11-08 16:52:12.616593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.144 [2024-11-08 16:52:12.616629] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.144 [2024-11-08 16:52:12.616670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.144 
16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.144 16:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.403 16:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.403 "name": "Existed_Raid", 00:10:43.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.403 "strip_size_kb": 0, 00:10:43.403 "state": "configuring", 00:10:43.403 "raid_level": "raid1", 00:10:43.403 "superblock": false, 00:10:43.403 "num_base_bdevs": 3, 00:10:43.403 "num_base_bdevs_discovered": 0, 00:10:43.403 "num_base_bdevs_operational": 3, 00:10:43.403 "base_bdevs_list": [ 00:10:43.403 { 00:10:43.403 "name": "BaseBdev1", 00:10:43.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.403 "is_configured": false, 00:10:43.403 "data_offset": 0, 00:10:43.403 "data_size": 0 00:10:43.403 }, 00:10:43.403 { 00:10:43.403 "name": "BaseBdev2", 00:10:43.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.403 "is_configured": false, 00:10:43.403 "data_offset": 0, 00:10:43.403 "data_size": 0 00:10:43.403 }, 00:10:43.403 { 00:10:43.403 "name": "BaseBdev3", 00:10:43.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.403 "is_configured": false, 00:10:43.403 "data_offset": 0, 00:10:43.403 "data_size": 0 00:10:43.403 } 00:10:43.403 ] 00:10:43.403 }' 00:10:43.403 16:52:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.403 16:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.663 [2024-11-08 16:52:13.107488] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.663 [2024-11-08 16:52:13.107537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.663 [2024-11-08 16:52:13.119490] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.663 [2024-11-08 16:52:13.119535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.663 [2024-11-08 16:52:13.119544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.663 [2024-11-08 16:52:13.119553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.663 [2024-11-08 16:52:13.119559] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.663 [2024-11-08 16:52:13.119567] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.663 [2024-11-08 16:52:13.140446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.663 BaseBdev1 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.663 [ 00:10:43.663 { 00:10:43.663 "name": "BaseBdev1", 00:10:43.663 "aliases": [ 00:10:43.663 "81f6eb04-9781-43fb-90d5-a277f9d7b62c" 00:10:43.663 ], 00:10:43.663 "product_name": "Malloc disk", 00:10:43.663 "block_size": 512, 00:10:43.663 "num_blocks": 65536, 00:10:43.663 "uuid": "81f6eb04-9781-43fb-90d5-a277f9d7b62c", 00:10:43.663 "assigned_rate_limits": { 00:10:43.663 "rw_ios_per_sec": 0, 00:10:43.663 "rw_mbytes_per_sec": 0, 00:10:43.663 "r_mbytes_per_sec": 0, 00:10:43.663 "w_mbytes_per_sec": 0 00:10:43.663 }, 00:10:43.663 "claimed": true, 00:10:43.663 "claim_type": "exclusive_write", 00:10:43.663 "zoned": false, 00:10:43.663 "supported_io_types": { 00:10:43.663 "read": true, 00:10:43.663 "write": true, 00:10:43.663 "unmap": true, 00:10:43.663 "flush": true, 00:10:43.663 "reset": true, 00:10:43.663 "nvme_admin": false, 00:10:43.663 "nvme_io": false, 00:10:43.663 "nvme_io_md": false, 00:10:43.663 "write_zeroes": true, 00:10:43.663 "zcopy": true, 00:10:43.663 "get_zone_info": false, 00:10:43.663 "zone_management": false, 00:10:43.663 "zone_append": false, 00:10:43.663 "compare": false, 00:10:43.663 "compare_and_write": false, 00:10:43.663 "abort": true, 00:10:43.663 "seek_hole": false, 00:10:43.663 "seek_data": false, 00:10:43.663 "copy": true, 00:10:43.663 "nvme_iov_md": false 00:10:43.663 }, 00:10:43.663 "memory_domains": [ 00:10:43.663 { 00:10:43.663 "dma_device_id": "system", 00:10:43.663 "dma_device_type": 1 00:10:43.663 }, 00:10:43.663 { 00:10:43.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.663 "dma_device_type": 2 00:10:43.663 } 00:10:43.663 ], 00:10:43.663 "driver_specific": {} 00:10:43.663 } 00:10:43.663 ] 00:10:43.663 16:52:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.663 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.923 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.923 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:43.923 "name": "Existed_Raid", 00:10:43.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.923 "strip_size_kb": 0, 00:10:43.923 "state": "configuring", 00:10:43.923 "raid_level": "raid1", 00:10:43.923 "superblock": false, 00:10:43.923 "num_base_bdevs": 3, 00:10:43.923 "num_base_bdevs_discovered": 1, 00:10:43.923 "num_base_bdevs_operational": 3, 00:10:43.923 "base_bdevs_list": [ 00:10:43.923 { 00:10:43.923 "name": "BaseBdev1", 00:10:43.923 "uuid": "81f6eb04-9781-43fb-90d5-a277f9d7b62c", 00:10:43.923 "is_configured": true, 00:10:43.923 "data_offset": 0, 00:10:43.923 "data_size": 65536 00:10:43.923 }, 00:10:43.923 { 00:10:43.923 "name": "BaseBdev2", 00:10:43.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.923 "is_configured": false, 00:10:43.923 "data_offset": 0, 00:10:43.923 "data_size": 0 00:10:43.923 }, 00:10:43.923 { 00:10:43.923 "name": "BaseBdev3", 00:10:43.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.923 "is_configured": false, 00:10:43.923 "data_offset": 0, 00:10:43.923 "data_size": 0 00:10:43.923 } 00:10:43.923 ] 00:10:43.923 }' 00:10:43.923 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.923 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.183 [2024-11-08 16:52:13.611720] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.183 [2024-11-08 16:52:13.611843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.183 [2024-11-08 16:52:13.619728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.183 [2024-11-08 16:52:13.621710] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.183 [2024-11-08 16:52:13.621785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.183 [2024-11-08 16:52:13.621814] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:44.183 [2024-11-08 16:52:13.621839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.183 "name": "Existed_Raid", 00:10:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.183 "strip_size_kb": 0, 00:10:44.183 "state": "configuring", 00:10:44.183 "raid_level": "raid1", 00:10:44.183 "superblock": false, 00:10:44.183 "num_base_bdevs": 3, 00:10:44.183 "num_base_bdevs_discovered": 1, 00:10:44.183 "num_base_bdevs_operational": 3, 00:10:44.183 "base_bdevs_list": [ 00:10:44.183 { 00:10:44.183 "name": "BaseBdev1", 00:10:44.183 "uuid": "81f6eb04-9781-43fb-90d5-a277f9d7b62c", 00:10:44.183 "is_configured": true, 00:10:44.183 "data_offset": 0, 00:10:44.183 "data_size": 65536 00:10:44.183 }, 00:10:44.183 { 00:10:44.183 "name": "BaseBdev2", 00:10:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.183 
"is_configured": false, 00:10:44.183 "data_offset": 0, 00:10:44.183 "data_size": 0 00:10:44.183 }, 00:10:44.183 { 00:10:44.183 "name": "BaseBdev3", 00:10:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.183 "is_configured": false, 00:10:44.183 "data_offset": 0, 00:10:44.183 "data_size": 0 00:10:44.183 } 00:10:44.183 ] 00:10:44.183 }' 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.183 16:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.752 [2024-11-08 16:52:14.042087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.752 BaseBdev2 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.752 16:52:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.752 [ 00:10:44.752 { 00:10:44.752 "name": "BaseBdev2", 00:10:44.752 "aliases": [ 00:10:44.752 "326d81d3-622f-485e-869b-b91502fba3c4" 00:10:44.752 ], 00:10:44.752 "product_name": "Malloc disk", 00:10:44.752 "block_size": 512, 00:10:44.752 "num_blocks": 65536, 00:10:44.752 "uuid": "326d81d3-622f-485e-869b-b91502fba3c4", 00:10:44.752 "assigned_rate_limits": { 00:10:44.752 "rw_ios_per_sec": 0, 00:10:44.752 "rw_mbytes_per_sec": 0, 00:10:44.752 "r_mbytes_per_sec": 0, 00:10:44.752 "w_mbytes_per_sec": 0 00:10:44.752 }, 00:10:44.752 "claimed": true, 00:10:44.752 "claim_type": "exclusive_write", 00:10:44.752 "zoned": false, 00:10:44.752 "supported_io_types": { 00:10:44.752 "read": true, 00:10:44.752 "write": true, 00:10:44.752 "unmap": true, 00:10:44.752 "flush": true, 00:10:44.752 "reset": true, 00:10:44.752 "nvme_admin": false, 00:10:44.752 "nvme_io": false, 00:10:44.752 "nvme_io_md": false, 00:10:44.752 "write_zeroes": true, 00:10:44.752 "zcopy": true, 00:10:44.752 "get_zone_info": false, 00:10:44.752 "zone_management": false, 00:10:44.752 "zone_append": false, 00:10:44.752 "compare": false, 00:10:44.752 "compare_and_write": false, 00:10:44.752 "abort": true, 00:10:44.752 "seek_hole": false, 00:10:44.752 "seek_data": false, 00:10:44.752 "copy": true, 00:10:44.752 "nvme_iov_md": false 00:10:44.752 }, 00:10:44.752 
"memory_domains": [ 00:10:44.752 { 00:10:44.752 "dma_device_id": "system", 00:10:44.752 "dma_device_type": 1 00:10:44.752 }, 00:10:44.752 { 00:10:44.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.752 "dma_device_type": 2 00:10:44.752 } 00:10:44.752 ], 00:10:44.752 "driver_specific": {} 00:10:44.752 } 00:10:44.752 ] 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.752 "name": "Existed_Raid", 00:10:44.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.752 "strip_size_kb": 0, 00:10:44.752 "state": "configuring", 00:10:44.752 "raid_level": "raid1", 00:10:44.752 "superblock": false, 00:10:44.752 "num_base_bdevs": 3, 00:10:44.752 "num_base_bdevs_discovered": 2, 00:10:44.752 "num_base_bdevs_operational": 3, 00:10:44.752 "base_bdevs_list": [ 00:10:44.752 { 00:10:44.752 "name": "BaseBdev1", 00:10:44.752 "uuid": "81f6eb04-9781-43fb-90d5-a277f9d7b62c", 00:10:44.752 "is_configured": true, 00:10:44.752 "data_offset": 0, 00:10:44.752 "data_size": 65536 00:10:44.752 }, 00:10:44.752 { 00:10:44.752 "name": "BaseBdev2", 00:10:44.752 "uuid": "326d81d3-622f-485e-869b-b91502fba3c4", 00:10:44.752 "is_configured": true, 00:10:44.752 "data_offset": 0, 00:10:44.752 "data_size": 65536 00:10:44.752 }, 00:10:44.752 { 00:10:44.752 "name": "BaseBdev3", 00:10:44.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.752 "is_configured": false, 00:10:44.752 "data_offset": 0, 00:10:44.752 "data_size": 0 00:10:44.752 } 00:10:44.752 ] 00:10:44.752 }' 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.752 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.012 [2024-11-08 16:52:14.516337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.012 [2024-11-08 16:52:14.516396] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:45.012 [2024-11-08 16:52:14.516407] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:45.012 [2024-11-08 16:52:14.516729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:45.012 [2024-11-08 16:52:14.516876] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:45.012 [2024-11-08 16:52:14.516887] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:45.012 [2024-11-08 16:52:14.517085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.012 BaseBdev3 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.012 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.012 [ 00:10:45.012 { 00:10:45.012 "name": "BaseBdev3", 00:10:45.012 "aliases": [ 00:10:45.012 "10ae6d49-ad59-4209-8a93-af34d3208cd3" 00:10:45.012 ], 00:10:45.012 "product_name": "Malloc disk", 00:10:45.012 "block_size": 512, 00:10:45.012 "num_blocks": 65536, 00:10:45.280 "uuid": "10ae6d49-ad59-4209-8a93-af34d3208cd3", 00:10:45.281 "assigned_rate_limits": { 00:10:45.281 "rw_ios_per_sec": 0, 00:10:45.281 "rw_mbytes_per_sec": 0, 00:10:45.281 "r_mbytes_per_sec": 0, 00:10:45.281 "w_mbytes_per_sec": 0 00:10:45.281 }, 00:10:45.281 "claimed": true, 00:10:45.281 "claim_type": "exclusive_write", 00:10:45.281 "zoned": false, 00:10:45.281 "supported_io_types": { 00:10:45.281 "read": true, 00:10:45.281 "write": true, 00:10:45.281 "unmap": true, 00:10:45.281 "flush": true, 00:10:45.281 "reset": true, 00:10:45.281 "nvme_admin": false, 00:10:45.281 "nvme_io": false, 00:10:45.281 "nvme_io_md": false, 00:10:45.281 "write_zeroes": true, 00:10:45.281 "zcopy": true, 00:10:45.281 "get_zone_info": false, 00:10:45.281 "zone_management": false, 00:10:45.281 "zone_append": false, 00:10:45.281 "compare": false, 00:10:45.281 "compare_and_write": false, 00:10:45.281 "abort": true, 00:10:45.281 "seek_hole": false, 00:10:45.281 "seek_data": false, 00:10:45.281 
"copy": true, 00:10:45.281 "nvme_iov_md": false 00:10:45.281 }, 00:10:45.281 "memory_domains": [ 00:10:45.281 { 00:10:45.281 "dma_device_id": "system", 00:10:45.281 "dma_device_type": 1 00:10:45.281 }, 00:10:45.281 { 00:10:45.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.281 "dma_device_type": 2 00:10:45.281 } 00:10:45.281 ], 00:10:45.281 "driver_specific": {} 00:10:45.281 } 00:10:45.281 ] 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.281 16:52:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.281 "name": "Existed_Raid", 00:10:45.281 "uuid": "082b759e-deca-4e17-8231-ff6c084e71fb", 00:10:45.281 "strip_size_kb": 0, 00:10:45.281 "state": "online", 00:10:45.281 "raid_level": "raid1", 00:10:45.281 "superblock": false, 00:10:45.281 "num_base_bdevs": 3, 00:10:45.281 "num_base_bdevs_discovered": 3, 00:10:45.281 "num_base_bdevs_operational": 3, 00:10:45.281 "base_bdevs_list": [ 00:10:45.281 { 00:10:45.281 "name": "BaseBdev1", 00:10:45.281 "uuid": "81f6eb04-9781-43fb-90d5-a277f9d7b62c", 00:10:45.281 "is_configured": true, 00:10:45.281 "data_offset": 0, 00:10:45.281 "data_size": 65536 00:10:45.281 }, 00:10:45.281 { 00:10:45.281 "name": "BaseBdev2", 00:10:45.281 "uuid": "326d81d3-622f-485e-869b-b91502fba3c4", 00:10:45.281 "is_configured": true, 00:10:45.281 "data_offset": 0, 00:10:45.281 "data_size": 65536 00:10:45.281 }, 00:10:45.281 { 00:10:45.281 "name": "BaseBdev3", 00:10:45.281 "uuid": "10ae6d49-ad59-4209-8a93-af34d3208cd3", 00:10:45.281 "is_configured": true, 00:10:45.281 "data_offset": 0, 00:10:45.281 "data_size": 65536 00:10:45.281 } 00:10:45.281 ] 00:10:45.281 }' 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.281 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.556 16:52:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.556 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.556 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.556 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.556 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.556 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.556 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.556 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:45.556 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.556 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.556 [2024-11-08 16:52:14.940022] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.556 16:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.556 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.556 "name": "Existed_Raid", 00:10:45.556 "aliases": [ 00:10:45.556 "082b759e-deca-4e17-8231-ff6c084e71fb" 00:10:45.556 ], 00:10:45.556 "product_name": "Raid Volume", 00:10:45.556 "block_size": 512, 00:10:45.556 "num_blocks": 65536, 00:10:45.556 "uuid": "082b759e-deca-4e17-8231-ff6c084e71fb", 00:10:45.556 "assigned_rate_limits": { 00:10:45.556 "rw_ios_per_sec": 0, 00:10:45.556 "rw_mbytes_per_sec": 0, 00:10:45.556 "r_mbytes_per_sec": 0, 00:10:45.556 "w_mbytes_per_sec": 0 00:10:45.556 }, 00:10:45.556 "claimed": false, 00:10:45.556 "zoned": false, 
00:10:45.556 "supported_io_types": { 00:10:45.556 "read": true, 00:10:45.556 "write": true, 00:10:45.556 "unmap": false, 00:10:45.556 "flush": false, 00:10:45.556 "reset": true, 00:10:45.556 "nvme_admin": false, 00:10:45.556 "nvme_io": false, 00:10:45.556 "nvme_io_md": false, 00:10:45.556 "write_zeroes": true, 00:10:45.556 "zcopy": false, 00:10:45.556 "get_zone_info": false, 00:10:45.556 "zone_management": false, 00:10:45.556 "zone_append": false, 00:10:45.556 "compare": false, 00:10:45.556 "compare_and_write": false, 00:10:45.556 "abort": false, 00:10:45.556 "seek_hole": false, 00:10:45.556 "seek_data": false, 00:10:45.556 "copy": false, 00:10:45.557 "nvme_iov_md": false 00:10:45.557 }, 00:10:45.557 "memory_domains": [ 00:10:45.557 { 00:10:45.557 "dma_device_id": "system", 00:10:45.557 "dma_device_type": 1 00:10:45.557 }, 00:10:45.557 { 00:10:45.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.557 "dma_device_type": 2 00:10:45.557 }, 00:10:45.557 { 00:10:45.557 "dma_device_id": "system", 00:10:45.557 "dma_device_type": 1 00:10:45.557 }, 00:10:45.557 { 00:10:45.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.557 "dma_device_type": 2 00:10:45.557 }, 00:10:45.557 { 00:10:45.557 "dma_device_id": "system", 00:10:45.557 "dma_device_type": 1 00:10:45.557 }, 00:10:45.557 { 00:10:45.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.557 "dma_device_type": 2 00:10:45.557 } 00:10:45.557 ], 00:10:45.557 "driver_specific": { 00:10:45.557 "raid": { 00:10:45.557 "uuid": "082b759e-deca-4e17-8231-ff6c084e71fb", 00:10:45.557 "strip_size_kb": 0, 00:10:45.557 "state": "online", 00:10:45.557 "raid_level": "raid1", 00:10:45.557 "superblock": false, 00:10:45.557 "num_base_bdevs": 3, 00:10:45.557 "num_base_bdevs_discovered": 3, 00:10:45.557 "num_base_bdevs_operational": 3, 00:10:45.557 "base_bdevs_list": [ 00:10:45.557 { 00:10:45.557 "name": "BaseBdev1", 00:10:45.557 "uuid": "81f6eb04-9781-43fb-90d5-a277f9d7b62c", 00:10:45.557 "is_configured": true, 00:10:45.557 
"data_offset": 0, 00:10:45.557 "data_size": 65536 00:10:45.557 }, 00:10:45.557 { 00:10:45.557 "name": "BaseBdev2", 00:10:45.557 "uuid": "326d81d3-622f-485e-869b-b91502fba3c4", 00:10:45.557 "is_configured": true, 00:10:45.557 "data_offset": 0, 00:10:45.557 "data_size": 65536 00:10:45.557 }, 00:10:45.557 { 00:10:45.557 "name": "BaseBdev3", 00:10:45.557 "uuid": "10ae6d49-ad59-4209-8a93-af34d3208cd3", 00:10:45.557 "is_configured": true, 00:10:45.557 "data_offset": 0, 00:10:45.557 "data_size": 65536 00:10:45.557 } 00:10:45.557 ] 00:10:45.557 } 00:10:45.557 } 00:10:45.557 }' 00:10:45.557 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.557 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:45.557 BaseBdev2 00:10:45.557 BaseBdev3' 00:10:45.557 16:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.557 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.557 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.557 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:45.557 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.557 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.557 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.557 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.817 [2024-11-08 16:52:15.203292] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.817 "name": "Existed_Raid", 00:10:45.817 "uuid": "082b759e-deca-4e17-8231-ff6c084e71fb", 00:10:45.817 "strip_size_kb": 0, 00:10:45.817 "state": "online", 00:10:45.817 "raid_level": "raid1", 00:10:45.817 "superblock": false, 00:10:45.817 "num_base_bdevs": 3, 00:10:45.817 "num_base_bdevs_discovered": 2, 00:10:45.817 "num_base_bdevs_operational": 2, 00:10:45.817 "base_bdevs_list": [ 00:10:45.817 { 00:10:45.817 "name": null, 00:10:45.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.817 "is_configured": false, 00:10:45.817 "data_offset": 0, 00:10:45.817 "data_size": 65536 00:10:45.817 }, 00:10:45.817 { 00:10:45.817 "name": "BaseBdev2", 00:10:45.817 "uuid": "326d81d3-622f-485e-869b-b91502fba3c4", 00:10:45.817 "is_configured": true, 00:10:45.817 "data_offset": 0, 00:10:45.817 "data_size": 65536 00:10:45.817 }, 00:10:45.817 { 00:10:45.817 "name": "BaseBdev3", 00:10:45.817 "uuid": "10ae6d49-ad59-4209-8a93-af34d3208cd3", 00:10:45.817 "is_configured": true, 00:10:45.817 "data_offset": 0, 00:10:45.817 "data_size": 65536 00:10:45.817 } 00:10:45.817 ] 
00:10:45.817 }' 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.817 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.386 [2024-11-08 16:52:15.722259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.386 16:52:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.386 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.387 [2024-11-08 16:52:15.793775] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.387 [2024-11-08 16:52:15.793921] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.387 [2024-11-08 16:52:15.805626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.387 [2024-11-08 16:52:15.805749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.387 [2024-11-08 16:52:15.805794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.387 16:52:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.387 BaseBdev2 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:46.387 
16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.387 [ 00:10:46.387 { 00:10:46.387 "name": "BaseBdev2", 00:10:46.387 "aliases": [ 00:10:46.387 "1f7d0ea1-6d0b-46dd-9054-871672b679a8" 00:10:46.387 ], 00:10:46.387 "product_name": "Malloc disk", 00:10:46.387 "block_size": 512, 00:10:46.387 "num_blocks": 65536, 00:10:46.387 "uuid": "1f7d0ea1-6d0b-46dd-9054-871672b679a8", 00:10:46.387 "assigned_rate_limits": { 00:10:46.387 "rw_ios_per_sec": 0, 00:10:46.387 "rw_mbytes_per_sec": 0, 00:10:46.387 "r_mbytes_per_sec": 0, 00:10:46.387 "w_mbytes_per_sec": 0 00:10:46.387 }, 00:10:46.387 "claimed": false, 00:10:46.387 "zoned": false, 00:10:46.387 "supported_io_types": { 00:10:46.387 "read": true, 00:10:46.387 "write": true, 00:10:46.387 "unmap": true, 00:10:46.387 "flush": true, 00:10:46.387 "reset": true, 00:10:46.387 "nvme_admin": false, 00:10:46.387 "nvme_io": false, 00:10:46.387 "nvme_io_md": false, 00:10:46.387 "write_zeroes": true, 
00:10:46.387 "zcopy": true, 00:10:46.387 "get_zone_info": false, 00:10:46.387 "zone_management": false, 00:10:46.387 "zone_append": false, 00:10:46.387 "compare": false, 00:10:46.387 "compare_and_write": false, 00:10:46.387 "abort": true, 00:10:46.387 "seek_hole": false, 00:10:46.387 "seek_data": false, 00:10:46.387 "copy": true, 00:10:46.387 "nvme_iov_md": false 00:10:46.387 }, 00:10:46.387 "memory_domains": [ 00:10:46.387 { 00:10:46.387 "dma_device_id": "system", 00:10:46.387 "dma_device_type": 1 00:10:46.387 }, 00:10:46.387 { 00:10:46.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.387 "dma_device_type": 2 00:10:46.387 } 00:10:46.387 ], 00:10:46.387 "driver_specific": {} 00:10:46.387 } 00:10:46.387 ] 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.387 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.647 BaseBdev3 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:46.647 16:52:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.647 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.647 [ 00:10:46.647 { 00:10:46.647 "name": "BaseBdev3", 00:10:46.647 "aliases": [ 00:10:46.647 "2f34c644-a44f-47ce-ab95-66f419948535" 00:10:46.647 ], 00:10:46.647 "product_name": "Malloc disk", 00:10:46.647 "block_size": 512, 00:10:46.647 "num_blocks": 65536, 00:10:46.647 "uuid": "2f34c644-a44f-47ce-ab95-66f419948535", 00:10:46.647 "assigned_rate_limits": { 00:10:46.647 "rw_ios_per_sec": 0, 00:10:46.647 "rw_mbytes_per_sec": 0, 00:10:46.648 "r_mbytes_per_sec": 0, 00:10:46.648 "w_mbytes_per_sec": 0 00:10:46.648 }, 00:10:46.648 "claimed": false, 00:10:46.648 "zoned": false, 00:10:46.648 "supported_io_types": { 00:10:46.648 "read": true, 00:10:46.648 "write": true, 00:10:46.648 "unmap": true, 00:10:46.648 "flush": true, 00:10:46.648 "reset": true, 00:10:46.648 "nvme_admin": false, 00:10:46.648 "nvme_io": false, 00:10:46.648 "nvme_io_md": false, 00:10:46.648 "write_zeroes": true, 
00:10:46.648 "zcopy": true, 00:10:46.648 "get_zone_info": false, 00:10:46.648 "zone_management": false, 00:10:46.648 "zone_append": false, 00:10:46.648 "compare": false, 00:10:46.648 "compare_and_write": false, 00:10:46.648 "abort": true, 00:10:46.648 "seek_hole": false, 00:10:46.648 "seek_data": false, 00:10:46.648 "copy": true, 00:10:46.648 "nvme_iov_md": false 00:10:46.648 }, 00:10:46.648 "memory_domains": [ 00:10:46.648 { 00:10:46.648 "dma_device_id": "system", 00:10:46.648 "dma_device_type": 1 00:10:46.648 }, 00:10:46.648 { 00:10:46.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.648 "dma_device_type": 2 00:10:46.648 } 00:10:46.648 ], 00:10:46.648 "driver_specific": {} 00:10:46.648 } 00:10:46.648 ] 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.648 [2024-11-08 16:52:15.971280] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.648 [2024-11-08 16:52:15.971388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.648 [2024-11-08 16:52:15.971438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.648 [2024-11-08 16:52:15.973514] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.648 16:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.648 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.648 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:46.648 "name": "Existed_Raid", 00:10:46.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.648 "strip_size_kb": 0, 00:10:46.648 "state": "configuring", 00:10:46.648 "raid_level": "raid1", 00:10:46.648 "superblock": false, 00:10:46.648 "num_base_bdevs": 3, 00:10:46.648 "num_base_bdevs_discovered": 2, 00:10:46.648 "num_base_bdevs_operational": 3, 00:10:46.648 "base_bdevs_list": [ 00:10:46.648 { 00:10:46.648 "name": "BaseBdev1", 00:10:46.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.648 "is_configured": false, 00:10:46.648 "data_offset": 0, 00:10:46.648 "data_size": 0 00:10:46.648 }, 00:10:46.648 { 00:10:46.648 "name": "BaseBdev2", 00:10:46.648 "uuid": "1f7d0ea1-6d0b-46dd-9054-871672b679a8", 00:10:46.648 "is_configured": true, 00:10:46.648 "data_offset": 0, 00:10:46.648 "data_size": 65536 00:10:46.648 }, 00:10:46.648 { 00:10:46.648 "name": "BaseBdev3", 00:10:46.648 "uuid": "2f34c644-a44f-47ce-ab95-66f419948535", 00:10:46.648 "is_configured": true, 00:10:46.648 "data_offset": 0, 00:10:46.648 "data_size": 65536 00:10:46.648 } 00:10:46.648 ] 00:10:46.648 }' 00:10:46.648 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.648 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.217 [2024-11-08 16:52:16.450591] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.217 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.217 "name": "Existed_Raid", 00:10:47.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.217 "strip_size_kb": 0, 00:10:47.217 "state": "configuring", 00:10:47.217 "raid_level": "raid1", 00:10:47.217 "superblock": false, 00:10:47.217 "num_base_bdevs": 3, 
00:10:47.217 "num_base_bdevs_discovered": 1, 00:10:47.217 "num_base_bdevs_operational": 3, 00:10:47.217 "base_bdevs_list": [ 00:10:47.217 { 00:10:47.217 "name": "BaseBdev1", 00:10:47.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.217 "is_configured": false, 00:10:47.217 "data_offset": 0, 00:10:47.217 "data_size": 0 00:10:47.217 }, 00:10:47.217 { 00:10:47.217 "name": null, 00:10:47.217 "uuid": "1f7d0ea1-6d0b-46dd-9054-871672b679a8", 00:10:47.217 "is_configured": false, 00:10:47.217 "data_offset": 0, 00:10:47.217 "data_size": 65536 00:10:47.217 }, 00:10:47.217 { 00:10:47.218 "name": "BaseBdev3", 00:10:47.218 "uuid": "2f34c644-a44f-47ce-ab95-66f419948535", 00:10:47.218 "is_configured": true, 00:10:47.218 "data_offset": 0, 00:10:47.218 "data_size": 65536 00:10:47.218 } 00:10:47.218 ] 00:10:47.218 }' 00:10:47.218 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.218 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.476 16:52:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.476 [2024-11-08 16:52:16.972645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.476 BaseBdev1 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.476 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.477 [ 00:10:47.477 { 00:10:47.477 "name": "BaseBdev1", 00:10:47.477 "aliases": [ 00:10:47.477 "4e90859d-0865-4b4a-832e-0f64544ecf36" 00:10:47.477 ], 00:10:47.477 "product_name": "Malloc disk", 
00:10:47.477 "block_size": 512, 00:10:47.477 "num_blocks": 65536, 00:10:47.477 "uuid": "4e90859d-0865-4b4a-832e-0f64544ecf36", 00:10:47.477 "assigned_rate_limits": { 00:10:47.477 "rw_ios_per_sec": 0, 00:10:47.477 "rw_mbytes_per_sec": 0, 00:10:47.477 "r_mbytes_per_sec": 0, 00:10:47.477 "w_mbytes_per_sec": 0 00:10:47.477 }, 00:10:47.477 "claimed": true, 00:10:47.477 "claim_type": "exclusive_write", 00:10:47.477 "zoned": false, 00:10:47.477 "supported_io_types": { 00:10:47.477 "read": true, 00:10:47.477 "write": true, 00:10:47.477 "unmap": true, 00:10:47.477 "flush": true, 00:10:47.477 "reset": true, 00:10:47.477 "nvme_admin": false, 00:10:47.477 "nvme_io": false, 00:10:47.477 "nvme_io_md": false, 00:10:47.477 "write_zeroes": true, 00:10:47.477 "zcopy": true, 00:10:47.477 "get_zone_info": false, 00:10:47.477 "zone_management": false, 00:10:47.477 "zone_append": false, 00:10:47.477 "compare": false, 00:10:47.477 "compare_and_write": false, 00:10:47.477 "abort": true, 00:10:47.477 "seek_hole": false, 00:10:47.477 "seek_data": false, 00:10:47.477 "copy": true, 00:10:47.477 "nvme_iov_md": false 00:10:47.477 }, 00:10:47.477 "memory_domains": [ 00:10:47.477 { 00:10:47.477 "dma_device_id": "system", 00:10:47.477 "dma_device_type": 1 00:10:47.477 }, 00:10:47.477 { 00:10:47.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.477 "dma_device_type": 2 00:10:47.477 } 00:10:47.477 ], 00:10:47.477 "driver_specific": {} 00:10:47.477 } 00:10:47.477 ] 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.477 16:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.477 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.477 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.477 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.768 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.768 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.768 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.768 "name": "Existed_Raid", 00:10:47.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.768 "strip_size_kb": 0, 00:10:47.768 "state": "configuring", 00:10:47.768 "raid_level": "raid1", 00:10:47.768 "superblock": false, 00:10:47.768 "num_base_bdevs": 3, 00:10:47.768 "num_base_bdevs_discovered": 2, 00:10:47.768 "num_base_bdevs_operational": 3, 00:10:47.768 "base_bdevs_list": [ 00:10:47.768 { 00:10:47.768 "name": "BaseBdev1", 00:10:47.768 "uuid": 
"4e90859d-0865-4b4a-832e-0f64544ecf36", 00:10:47.768 "is_configured": true, 00:10:47.768 "data_offset": 0, 00:10:47.768 "data_size": 65536 00:10:47.768 }, 00:10:47.768 { 00:10:47.768 "name": null, 00:10:47.768 "uuid": "1f7d0ea1-6d0b-46dd-9054-871672b679a8", 00:10:47.768 "is_configured": false, 00:10:47.768 "data_offset": 0, 00:10:47.768 "data_size": 65536 00:10:47.768 }, 00:10:47.768 { 00:10:47.768 "name": "BaseBdev3", 00:10:47.768 "uuid": "2f34c644-a44f-47ce-ab95-66f419948535", 00:10:47.768 "is_configured": true, 00:10:47.768 "data_offset": 0, 00:10:47.768 "data_size": 65536 00:10:47.768 } 00:10:47.768 ] 00:10:47.768 }' 00:10:47.768 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.768 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 [2024-11-08 16:52:17.471835] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.028 16:52:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.028 "name": "Existed_Raid", 00:10:48.028 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:48.028 "strip_size_kb": 0, 00:10:48.028 "state": "configuring", 00:10:48.028 "raid_level": "raid1", 00:10:48.028 "superblock": false, 00:10:48.028 "num_base_bdevs": 3, 00:10:48.028 "num_base_bdevs_discovered": 1, 00:10:48.028 "num_base_bdevs_operational": 3, 00:10:48.028 "base_bdevs_list": [ 00:10:48.028 { 00:10:48.028 "name": "BaseBdev1", 00:10:48.028 "uuid": "4e90859d-0865-4b4a-832e-0f64544ecf36", 00:10:48.028 "is_configured": true, 00:10:48.028 "data_offset": 0, 00:10:48.028 "data_size": 65536 00:10:48.028 }, 00:10:48.028 { 00:10:48.028 "name": null, 00:10:48.028 "uuid": "1f7d0ea1-6d0b-46dd-9054-871672b679a8", 00:10:48.028 "is_configured": false, 00:10:48.028 "data_offset": 0, 00:10:48.028 "data_size": 65536 00:10:48.028 }, 00:10:48.028 { 00:10:48.028 "name": null, 00:10:48.028 "uuid": "2f34c644-a44f-47ce-ab95-66f419948535", 00:10:48.028 "is_configured": false, 00:10:48.028 "data_offset": 0, 00:10:48.028 "data_size": 65536 00:10:48.028 } 00:10:48.028 ] 00:10:48.028 }' 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.028 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.595 [2024-11-08 16:52:17.875235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.595 "name": "Existed_Raid", 00:10:48.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.595 "strip_size_kb": 0, 00:10:48.595 "state": "configuring", 00:10:48.595 "raid_level": "raid1", 00:10:48.595 "superblock": false, 00:10:48.595 "num_base_bdevs": 3, 00:10:48.595 "num_base_bdevs_discovered": 2, 00:10:48.595 "num_base_bdevs_operational": 3, 00:10:48.595 "base_bdevs_list": [ 00:10:48.595 { 00:10:48.595 "name": "BaseBdev1", 00:10:48.595 "uuid": "4e90859d-0865-4b4a-832e-0f64544ecf36", 00:10:48.595 "is_configured": true, 00:10:48.595 "data_offset": 0, 00:10:48.595 "data_size": 65536 00:10:48.595 }, 00:10:48.595 { 00:10:48.595 "name": null, 00:10:48.595 "uuid": "1f7d0ea1-6d0b-46dd-9054-871672b679a8", 00:10:48.595 "is_configured": false, 00:10:48.595 "data_offset": 0, 00:10:48.595 "data_size": 65536 00:10:48.595 }, 00:10:48.595 { 00:10:48.595 "name": "BaseBdev3", 00:10:48.595 "uuid": "2f34c644-a44f-47ce-ab95-66f419948535", 00:10:48.595 "is_configured": true, 00:10:48.595 "data_offset": 0, 00:10:48.595 "data_size": 65536 00:10:48.595 } 00:10:48.595 ] 00:10:48.595 }' 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.595 16:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.853 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.853 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.853 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:48.853 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:48.853 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.854 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:48.854 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:48.854 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.854 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.112 [2024-11-08 16:52:18.382371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.112 16:52:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.112 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.113 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.113 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.113 "name": "Existed_Raid", 00:10:49.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.113 "strip_size_kb": 0, 00:10:49.113 "state": "configuring", 00:10:49.113 "raid_level": "raid1", 00:10:49.113 "superblock": false, 00:10:49.113 "num_base_bdevs": 3, 00:10:49.113 "num_base_bdevs_discovered": 1, 00:10:49.113 "num_base_bdevs_operational": 3, 00:10:49.113 "base_bdevs_list": [ 00:10:49.113 { 00:10:49.113 "name": null, 00:10:49.113 "uuid": "4e90859d-0865-4b4a-832e-0f64544ecf36", 00:10:49.113 "is_configured": false, 00:10:49.113 "data_offset": 0, 00:10:49.113 "data_size": 65536 00:10:49.113 }, 00:10:49.113 { 00:10:49.113 "name": null, 00:10:49.113 "uuid": "1f7d0ea1-6d0b-46dd-9054-871672b679a8", 00:10:49.113 "is_configured": false, 00:10:49.113 "data_offset": 0, 00:10:49.113 "data_size": 65536 00:10:49.113 }, 00:10:49.113 { 00:10:49.113 "name": "BaseBdev3", 00:10:49.113 "uuid": "2f34c644-a44f-47ce-ab95-66f419948535", 00:10:49.113 "is_configured": true, 00:10:49.113 "data_offset": 0, 00:10:49.113 "data_size": 65536 00:10:49.113 } 00:10:49.113 ] 00:10:49.113 }' 00:10:49.113 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.113 16:52:18 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.371 [2024-11-08 16:52:18.892215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:49.371 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.630 "name": "Existed_Raid", 00:10:49.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.630 "strip_size_kb": 0, 00:10:49.630 "state": "configuring", 00:10:49.630 "raid_level": "raid1", 00:10:49.630 "superblock": false, 00:10:49.630 "num_base_bdevs": 3, 00:10:49.630 "num_base_bdevs_discovered": 2, 00:10:49.630 "num_base_bdevs_operational": 3, 00:10:49.630 "base_bdevs_list": [ 00:10:49.630 { 00:10:49.630 "name": null, 00:10:49.630 "uuid": "4e90859d-0865-4b4a-832e-0f64544ecf36", 00:10:49.630 "is_configured": false, 00:10:49.630 "data_offset": 0, 00:10:49.630 "data_size": 65536 00:10:49.630 }, 00:10:49.630 { 00:10:49.630 "name": "BaseBdev2", 00:10:49.630 "uuid": "1f7d0ea1-6d0b-46dd-9054-871672b679a8", 00:10:49.630 "is_configured": true, 00:10:49.630 "data_offset": 0, 00:10:49.630 "data_size": 65536 00:10:49.630 }, 00:10:49.630 { 
00:10:49.630 "name": "BaseBdev3", 00:10:49.630 "uuid": "2f34c644-a44f-47ce-ab95-66f419948535", 00:10:49.630 "is_configured": true, 00:10:49.630 "data_offset": 0, 00:10:49.630 "data_size": 65536 00:10:49.630 } 00:10:49.630 ] 00:10:49.630 }' 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.630 16:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4e90859d-0865-4b4a-832e-0f64544ecf36 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.889 16:52:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.889 [2024-11-08 16:52:19.394224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:49.889 [2024-11-08 16:52:19.394272] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:49.889 [2024-11-08 16:52:19.394280] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:49.889 [2024-11-08 16:52:19.394521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:49.889 [2024-11-08 16:52:19.394677] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:49.889 [2024-11-08 16:52:19.394693] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:49.889 [2024-11-08 16:52:19.394871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.889 NewBaseBdev 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.889 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.148 [ 00:10:50.148 { 00:10:50.148 "name": "NewBaseBdev", 00:10:50.148 "aliases": [ 00:10:50.148 "4e90859d-0865-4b4a-832e-0f64544ecf36" 00:10:50.148 ], 00:10:50.148 "product_name": "Malloc disk", 00:10:50.148 "block_size": 512, 00:10:50.148 "num_blocks": 65536, 00:10:50.148 "uuid": "4e90859d-0865-4b4a-832e-0f64544ecf36", 00:10:50.148 "assigned_rate_limits": { 00:10:50.148 "rw_ios_per_sec": 0, 00:10:50.148 "rw_mbytes_per_sec": 0, 00:10:50.148 "r_mbytes_per_sec": 0, 00:10:50.148 "w_mbytes_per_sec": 0 00:10:50.148 }, 00:10:50.148 "claimed": true, 00:10:50.148 "claim_type": "exclusive_write", 00:10:50.148 "zoned": false, 00:10:50.148 "supported_io_types": { 00:10:50.148 "read": true, 00:10:50.148 "write": true, 00:10:50.148 "unmap": true, 00:10:50.148 "flush": true, 00:10:50.148 "reset": true, 00:10:50.148 "nvme_admin": false, 00:10:50.148 "nvme_io": false, 00:10:50.148 "nvme_io_md": false, 00:10:50.148 "write_zeroes": true, 00:10:50.148 "zcopy": true, 00:10:50.148 "get_zone_info": false, 00:10:50.148 "zone_management": false, 00:10:50.148 "zone_append": false, 00:10:50.148 "compare": false, 00:10:50.148 "compare_and_write": false, 00:10:50.148 "abort": true, 00:10:50.148 "seek_hole": false, 00:10:50.148 "seek_data": false, 00:10:50.148 "copy": true, 00:10:50.148 "nvme_iov_md": false 00:10:50.148 }, 00:10:50.148 "memory_domains": [ 00:10:50.148 { 00:10:50.148 
"dma_device_id": "system", 00:10:50.148 "dma_device_type": 1 00:10:50.148 }, 00:10:50.148 { 00:10:50.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.148 "dma_device_type": 2 00:10:50.148 } 00:10:50.148 ], 00:10:50.148 "driver_specific": {} 00:10:50.148 } 00:10:50.148 ] 00:10:50.148 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.148 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:50.148 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:50.148 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.148 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.148 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.149 "name": "Existed_Raid", 00:10:50.149 "uuid": "9724c71b-4079-49a8-9805-63acc79f00c3", 00:10:50.149 "strip_size_kb": 0, 00:10:50.149 "state": "online", 00:10:50.149 "raid_level": "raid1", 00:10:50.149 "superblock": false, 00:10:50.149 "num_base_bdevs": 3, 00:10:50.149 "num_base_bdevs_discovered": 3, 00:10:50.149 "num_base_bdevs_operational": 3, 00:10:50.149 "base_bdevs_list": [ 00:10:50.149 { 00:10:50.149 "name": "NewBaseBdev", 00:10:50.149 "uuid": "4e90859d-0865-4b4a-832e-0f64544ecf36", 00:10:50.149 "is_configured": true, 00:10:50.149 "data_offset": 0, 00:10:50.149 "data_size": 65536 00:10:50.149 }, 00:10:50.149 { 00:10:50.149 "name": "BaseBdev2", 00:10:50.149 "uuid": "1f7d0ea1-6d0b-46dd-9054-871672b679a8", 00:10:50.149 "is_configured": true, 00:10:50.149 "data_offset": 0, 00:10:50.149 "data_size": 65536 00:10:50.149 }, 00:10:50.149 { 00:10:50.149 "name": "BaseBdev3", 00:10:50.149 "uuid": "2f34c644-a44f-47ce-ab95-66f419948535", 00:10:50.149 "is_configured": true, 00:10:50.149 "data_offset": 0, 00:10:50.149 "data_size": 65536 00:10:50.149 } 00:10:50.149 ] 00:10:50.149 }' 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.149 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.407 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.407 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:50.407 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.407 16:52:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.407 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.407 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.407 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.407 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:50.407 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.407 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.407 [2024-11-08 16:52:19.925702] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.664 16:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.664 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.664 "name": "Existed_Raid", 00:10:50.665 "aliases": [ 00:10:50.665 "9724c71b-4079-49a8-9805-63acc79f00c3" 00:10:50.665 ], 00:10:50.665 "product_name": "Raid Volume", 00:10:50.665 "block_size": 512, 00:10:50.665 "num_blocks": 65536, 00:10:50.665 "uuid": "9724c71b-4079-49a8-9805-63acc79f00c3", 00:10:50.665 "assigned_rate_limits": { 00:10:50.665 "rw_ios_per_sec": 0, 00:10:50.665 "rw_mbytes_per_sec": 0, 00:10:50.665 "r_mbytes_per_sec": 0, 00:10:50.665 "w_mbytes_per_sec": 0 00:10:50.665 }, 00:10:50.665 "claimed": false, 00:10:50.665 "zoned": false, 00:10:50.665 "supported_io_types": { 00:10:50.665 "read": true, 00:10:50.665 "write": true, 00:10:50.665 "unmap": false, 00:10:50.665 "flush": false, 00:10:50.665 "reset": true, 00:10:50.665 "nvme_admin": false, 00:10:50.665 "nvme_io": false, 00:10:50.665 "nvme_io_md": false, 00:10:50.665 "write_zeroes": true, 00:10:50.665 "zcopy": false, 00:10:50.665 
"get_zone_info": false, 00:10:50.665 "zone_management": false, 00:10:50.665 "zone_append": false, 00:10:50.665 "compare": false, 00:10:50.665 "compare_and_write": false, 00:10:50.665 "abort": false, 00:10:50.665 "seek_hole": false, 00:10:50.665 "seek_data": false, 00:10:50.665 "copy": false, 00:10:50.665 "nvme_iov_md": false 00:10:50.665 }, 00:10:50.665 "memory_domains": [ 00:10:50.665 { 00:10:50.665 "dma_device_id": "system", 00:10:50.665 "dma_device_type": 1 00:10:50.665 }, 00:10:50.665 { 00:10:50.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.665 "dma_device_type": 2 00:10:50.665 }, 00:10:50.665 { 00:10:50.665 "dma_device_id": "system", 00:10:50.665 "dma_device_type": 1 00:10:50.665 }, 00:10:50.665 { 00:10:50.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.665 "dma_device_type": 2 00:10:50.665 }, 00:10:50.665 { 00:10:50.665 "dma_device_id": "system", 00:10:50.665 "dma_device_type": 1 00:10:50.665 }, 00:10:50.665 { 00:10:50.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.665 "dma_device_type": 2 00:10:50.665 } 00:10:50.665 ], 00:10:50.665 "driver_specific": { 00:10:50.665 "raid": { 00:10:50.665 "uuid": "9724c71b-4079-49a8-9805-63acc79f00c3", 00:10:50.665 "strip_size_kb": 0, 00:10:50.665 "state": "online", 00:10:50.665 "raid_level": "raid1", 00:10:50.665 "superblock": false, 00:10:50.665 "num_base_bdevs": 3, 00:10:50.665 "num_base_bdevs_discovered": 3, 00:10:50.665 "num_base_bdevs_operational": 3, 00:10:50.665 "base_bdevs_list": [ 00:10:50.665 { 00:10:50.665 "name": "NewBaseBdev", 00:10:50.665 "uuid": "4e90859d-0865-4b4a-832e-0f64544ecf36", 00:10:50.665 "is_configured": true, 00:10:50.665 "data_offset": 0, 00:10:50.665 "data_size": 65536 00:10:50.665 }, 00:10:50.665 { 00:10:50.665 "name": "BaseBdev2", 00:10:50.665 "uuid": "1f7d0ea1-6d0b-46dd-9054-871672b679a8", 00:10:50.665 "is_configured": true, 00:10:50.665 "data_offset": 0, 00:10:50.665 "data_size": 65536 00:10:50.665 }, 00:10:50.665 { 00:10:50.665 "name": "BaseBdev3", 00:10:50.665 "uuid": 
"2f34c644-a44f-47ce-ab95-66f419948535", 00:10:50.665 "is_configured": true, 00:10:50.665 "data_offset": 0, 00:10:50.665 "data_size": 65536 00:10:50.665 } 00:10:50.665 ] 00:10:50.665 } 00:10:50.665 } 00:10:50.665 }' 00:10:50.665 16:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:50.665 BaseBdev2 00:10:50.665 BaseBdev3' 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.665 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.934 
[2024-11-08 16:52:20.196908] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.934 [2024-11-08 16:52:20.196982] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.934 [2024-11-08 16:52:20.197076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.934 [2024-11-08 16:52:20.197377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.934 [2024-11-08 16:52:20.197437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78454 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78454 ']' 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78454 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78454 00:10:50.934 killing process with pid 78454 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78454' 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78454 00:10:50.934 [2024-11-08 
16:52:20.246661] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.934 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78454 00:10:50.934 [2024-11-08 16:52:20.277559] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:51.213 00:10:51.213 real 0m8.827s 00:10:51.213 user 0m15.100s 00:10:51.213 sys 0m1.769s 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:51.213 ************************************ 00:10:51.213 END TEST raid_state_function_test 00:10:51.213 ************************************ 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.213 16:52:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:51.213 16:52:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:51.213 16:52:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.213 16:52:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.213 ************************************ 00:10:51.213 START TEST raid_state_function_test_sb 00:10:51.213 ************************************ 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:51.213 16:52:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:51.213 
16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79053 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79053' 00:10:51.213 Process raid pid: 79053 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79053 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79053 ']' 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.213 16:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.213 [2024-11-08 16:52:20.688078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:51.213 [2024-11-08 16:52:20.688302] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.473 [2024-11-08 16:52:20.851856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.473 [2024-11-08 16:52:20.902039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.473 [2024-11-08 16:52:20.944175] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.473 [2024-11-08 16:52:20.944291] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.042 [2024-11-08 16:52:21.537401] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.042 [2024-11-08 16:52:21.537508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.042 [2024-11-08 16:52:21.537543] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.042 [2024-11-08 16:52:21.537568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.042 [2024-11-08 16:52:21.537587] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:52.042 [2024-11-08 16:52:21.537614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.042 16:52:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.302 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.302 "name": "Existed_Raid", 00:10:52.302 "uuid": "79f7f8d5-4367-4da3-8af0-9d715cb5b80a", 00:10:52.302 "strip_size_kb": 0, 00:10:52.302 "state": "configuring", 00:10:52.302 "raid_level": "raid1", 00:10:52.302 "superblock": true, 00:10:52.302 "num_base_bdevs": 3, 00:10:52.302 "num_base_bdevs_discovered": 0, 00:10:52.302 "num_base_bdevs_operational": 3, 00:10:52.302 "base_bdevs_list": [ 00:10:52.302 { 00:10:52.302 "name": "BaseBdev1", 00:10:52.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.302 "is_configured": false, 00:10:52.302 "data_offset": 0, 00:10:52.302 "data_size": 0 00:10:52.302 }, 00:10:52.302 { 00:10:52.302 "name": "BaseBdev2", 00:10:52.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.302 "is_configured": false, 00:10:52.302 "data_offset": 0, 00:10:52.302 "data_size": 0 00:10:52.302 }, 00:10:52.302 { 00:10:52.302 "name": "BaseBdev3", 00:10:52.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.302 "is_configured": false, 00:10:52.302 "data_offset": 0, 00:10:52.302 "data_size": 0 00:10:52.302 } 00:10:52.302 ] 00:10:52.302 }' 00:10:52.302 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.302 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.563 [2024-11-08 16:52:21.956581] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.563 [2024-11-08 16:52:21.956703] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.563 [2024-11-08 16:52:21.964590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.563 [2024-11-08 16:52:21.964692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.563 [2024-11-08 16:52:21.964722] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.563 [2024-11-08 16:52:21.964745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.563 [2024-11-08 16:52:21.964764] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.563 [2024-11-08 16:52:21.964786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.563 [2024-11-08 16:52:21.981341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.563 BaseBdev1 
00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.563 16:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.563 [ 00:10:52.563 { 00:10:52.563 "name": "BaseBdev1", 00:10:52.563 "aliases": [ 00:10:52.563 "5351ce76-bb7c-4475-a685-7d4ea8fbf1c3" 00:10:52.563 ], 00:10:52.563 "product_name": "Malloc disk", 00:10:52.563 "block_size": 512, 00:10:52.563 "num_blocks": 65536, 00:10:52.563 "uuid": "5351ce76-bb7c-4475-a685-7d4ea8fbf1c3", 00:10:52.563 "assigned_rate_limits": { 00:10:52.563 
"rw_ios_per_sec": 0, 00:10:52.563 "rw_mbytes_per_sec": 0, 00:10:52.563 "r_mbytes_per_sec": 0, 00:10:52.563 "w_mbytes_per_sec": 0 00:10:52.563 }, 00:10:52.563 "claimed": true, 00:10:52.563 "claim_type": "exclusive_write", 00:10:52.563 "zoned": false, 00:10:52.563 "supported_io_types": { 00:10:52.563 "read": true, 00:10:52.563 "write": true, 00:10:52.563 "unmap": true, 00:10:52.563 "flush": true, 00:10:52.563 "reset": true, 00:10:52.563 "nvme_admin": false, 00:10:52.563 "nvme_io": false, 00:10:52.563 "nvme_io_md": false, 00:10:52.563 "write_zeroes": true, 00:10:52.563 "zcopy": true, 00:10:52.563 "get_zone_info": false, 00:10:52.563 "zone_management": false, 00:10:52.563 "zone_append": false, 00:10:52.563 "compare": false, 00:10:52.563 "compare_and_write": false, 00:10:52.563 "abort": true, 00:10:52.563 "seek_hole": false, 00:10:52.563 "seek_data": false, 00:10:52.563 "copy": true, 00:10:52.563 "nvme_iov_md": false 00:10:52.563 }, 00:10:52.563 "memory_domains": [ 00:10:52.563 { 00:10:52.563 "dma_device_id": "system", 00:10:52.563 "dma_device_type": 1 00:10:52.564 }, 00:10:52.564 { 00:10:52.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.564 "dma_device_type": 2 00:10:52.564 } 00:10:52.564 ], 00:10:52.564 "driver_specific": {} 00:10:52.564 } 00:10:52.564 ] 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.564 "name": "Existed_Raid", 00:10:52.564 "uuid": "71ad768e-1679-4e78-9019-2eec5e4ad8b4", 00:10:52.564 "strip_size_kb": 0, 00:10:52.564 "state": "configuring", 00:10:52.564 "raid_level": "raid1", 00:10:52.564 "superblock": true, 00:10:52.564 "num_base_bdevs": 3, 00:10:52.564 "num_base_bdevs_discovered": 1, 00:10:52.564 "num_base_bdevs_operational": 3, 00:10:52.564 "base_bdevs_list": [ 00:10:52.564 { 00:10:52.564 "name": "BaseBdev1", 00:10:52.564 "uuid": "5351ce76-bb7c-4475-a685-7d4ea8fbf1c3", 00:10:52.564 "is_configured": true, 00:10:52.564 "data_offset": 2048, 00:10:52.564 "data_size": 63488 
00:10:52.564 }, 00:10:52.564 { 00:10:52.564 "name": "BaseBdev2", 00:10:52.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.564 "is_configured": false, 00:10:52.564 "data_offset": 0, 00:10:52.564 "data_size": 0 00:10:52.564 }, 00:10:52.564 { 00:10:52.564 "name": "BaseBdev3", 00:10:52.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.564 "is_configured": false, 00:10:52.564 "data_offset": 0, 00:10:52.564 "data_size": 0 00:10:52.564 } 00:10:52.564 ] 00:10:52.564 }' 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.564 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.133 [2024-11-08 16:52:22.488531] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.133 [2024-11-08 16:52:22.488589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.133 [2024-11-08 16:52:22.500560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.133 [2024-11-08 16:52:22.502480] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.133 [2024-11-08 16:52:22.502525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.133 [2024-11-08 16:52:22.502534] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.133 [2024-11-08 16:52:22.502544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.133 "name": "Existed_Raid", 00:10:53.133 "uuid": "dd49f82c-eb67-4a89-9fbd-1e855215be39", 00:10:53.133 "strip_size_kb": 0, 00:10:53.133 "state": "configuring", 00:10:53.133 "raid_level": "raid1", 00:10:53.133 "superblock": true, 00:10:53.133 "num_base_bdevs": 3, 00:10:53.133 "num_base_bdevs_discovered": 1, 00:10:53.133 "num_base_bdevs_operational": 3, 00:10:53.133 "base_bdevs_list": [ 00:10:53.133 { 00:10:53.133 "name": "BaseBdev1", 00:10:53.133 "uuid": "5351ce76-bb7c-4475-a685-7d4ea8fbf1c3", 00:10:53.133 "is_configured": true, 00:10:53.133 "data_offset": 2048, 00:10:53.133 "data_size": 63488 00:10:53.133 }, 00:10:53.133 { 00:10:53.133 "name": "BaseBdev2", 00:10:53.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.133 "is_configured": false, 00:10:53.133 "data_offset": 0, 00:10:53.133 "data_size": 0 00:10:53.133 }, 00:10:53.133 { 00:10:53.133 "name": "BaseBdev3", 00:10:53.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.133 "is_configured": false, 00:10:53.133 "data_offset": 0, 00:10:53.133 "data_size": 0 00:10:53.133 } 00:10:53.133 ] 00:10:53.133 }' 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.133 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:53.703 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:53.703 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.703 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.703 [2024-11-08 16:52:22.981436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.703 BaseBdev2 00:10:53.703 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.703 16:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:53.703 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:53.703 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.703 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:53.703 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.704 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.704 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.704 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.704 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.704 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.704 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:53.704 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:53.704 16:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.704 [ 00:10:53.704 { 00:10:53.704 "name": "BaseBdev2", 00:10:53.704 "aliases": [ 00:10:53.704 "d5aa0c95-2d61-4942-8360-d13bb4b4db69" 00:10:53.704 ], 00:10:53.704 "product_name": "Malloc disk", 00:10:53.704 "block_size": 512, 00:10:53.704 "num_blocks": 65536, 00:10:53.704 "uuid": "d5aa0c95-2d61-4942-8360-d13bb4b4db69", 00:10:53.704 "assigned_rate_limits": { 00:10:53.704 "rw_ios_per_sec": 0, 00:10:53.704 "rw_mbytes_per_sec": 0, 00:10:53.704 "r_mbytes_per_sec": 0, 00:10:53.704 "w_mbytes_per_sec": 0 00:10:53.704 }, 00:10:53.704 "claimed": true, 00:10:53.704 "claim_type": "exclusive_write", 00:10:53.704 "zoned": false, 00:10:53.704 "supported_io_types": { 00:10:53.704 "read": true, 00:10:53.704 "write": true, 00:10:53.704 "unmap": true, 00:10:53.704 "flush": true, 00:10:53.704 "reset": true, 00:10:53.704 "nvme_admin": false, 00:10:53.704 "nvme_io": false, 00:10:53.704 "nvme_io_md": false, 00:10:53.704 "write_zeroes": true, 00:10:53.704 "zcopy": true, 00:10:53.704 "get_zone_info": false, 00:10:53.704 "zone_management": false, 00:10:53.704 "zone_append": false, 00:10:53.704 "compare": false, 00:10:53.704 "compare_and_write": false, 00:10:53.704 "abort": true, 00:10:53.704 "seek_hole": false, 00:10:53.704 "seek_data": false, 00:10:53.704 "copy": true, 00:10:53.704 "nvme_iov_md": false 00:10:53.704 }, 00:10:53.704 "memory_domains": [ 00:10:53.704 { 00:10:53.704 "dma_device_id": "system", 00:10:53.704 "dma_device_type": 1 00:10:53.704 }, 00:10:53.704 { 00:10:53.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.704 "dma_device_type": 2 00:10:53.704 } 00:10:53.704 ], 00:10:53.704 "driver_specific": {} 00:10:53.704 } 00:10:53.704 ] 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.704 
16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.704 "name": "Existed_Raid", 00:10:53.704 "uuid": "dd49f82c-eb67-4a89-9fbd-1e855215be39", 00:10:53.704 "strip_size_kb": 0, 00:10:53.704 "state": "configuring", 00:10:53.704 "raid_level": "raid1", 00:10:53.704 "superblock": true, 00:10:53.704 "num_base_bdevs": 3, 00:10:53.704 "num_base_bdevs_discovered": 2, 00:10:53.704 "num_base_bdevs_operational": 3, 00:10:53.704 "base_bdevs_list": [ 00:10:53.704 { 00:10:53.704 "name": "BaseBdev1", 00:10:53.704 "uuid": "5351ce76-bb7c-4475-a685-7d4ea8fbf1c3", 00:10:53.704 "is_configured": true, 00:10:53.704 "data_offset": 2048, 00:10:53.704 "data_size": 63488 00:10:53.704 }, 00:10:53.704 { 00:10:53.704 "name": "BaseBdev2", 00:10:53.704 "uuid": "d5aa0c95-2d61-4942-8360-d13bb4b4db69", 00:10:53.704 "is_configured": true, 00:10:53.704 "data_offset": 2048, 00:10:53.704 "data_size": 63488 00:10:53.704 }, 00:10:53.704 { 00:10:53.704 "name": "BaseBdev3", 00:10:53.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.704 "is_configured": false, 00:10:53.704 "data_offset": 0, 00:10:53.704 "data_size": 0 00:10:53.704 } 00:10:53.704 ] 00:10:53.704 }' 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.704 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.965 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:53.965 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.965 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.965 [2024-11-08 16:52:23.487661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.965 [2024-11-08 16:52:23.487861] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:10:53.965 [2024-11-08 16:52:23.487878] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:54.225 BaseBdev3 00:10:54.225 [2024-11-08 16:52:23.488171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:54.225 [2024-11-08 16:52:23.488348] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:54.225 [2024-11-08 16:52:23.488358] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:54.225 [2024-11-08 16:52:23.488473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.225 16:52:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.225 [ 00:10:54.225 { 00:10:54.225 "name": "BaseBdev3", 00:10:54.225 "aliases": [ 00:10:54.225 "d00684dc-d749-477e-b121-0ee481ef616b" 00:10:54.225 ], 00:10:54.225 "product_name": "Malloc disk", 00:10:54.225 "block_size": 512, 00:10:54.225 "num_blocks": 65536, 00:10:54.225 "uuid": "d00684dc-d749-477e-b121-0ee481ef616b", 00:10:54.225 "assigned_rate_limits": { 00:10:54.225 "rw_ios_per_sec": 0, 00:10:54.225 "rw_mbytes_per_sec": 0, 00:10:54.225 "r_mbytes_per_sec": 0, 00:10:54.225 "w_mbytes_per_sec": 0 00:10:54.225 }, 00:10:54.225 "claimed": true, 00:10:54.225 "claim_type": "exclusive_write", 00:10:54.225 "zoned": false, 00:10:54.225 "supported_io_types": { 00:10:54.225 "read": true, 00:10:54.225 "write": true, 00:10:54.225 "unmap": true, 00:10:54.225 "flush": true, 00:10:54.225 "reset": true, 00:10:54.225 "nvme_admin": false, 00:10:54.225 "nvme_io": false, 00:10:54.225 "nvme_io_md": false, 00:10:54.225 "write_zeroes": true, 00:10:54.225 "zcopy": true, 00:10:54.225 "get_zone_info": false, 00:10:54.225 "zone_management": false, 00:10:54.225 "zone_append": false, 00:10:54.225 "compare": false, 00:10:54.225 "compare_and_write": false, 00:10:54.225 "abort": true, 00:10:54.225 "seek_hole": false, 00:10:54.225 "seek_data": false, 00:10:54.225 "copy": true, 00:10:54.225 "nvme_iov_md": false 00:10:54.225 }, 00:10:54.225 "memory_domains": [ 00:10:54.225 { 00:10:54.225 "dma_device_id": "system", 00:10:54.225 "dma_device_type": 1 00:10:54.225 }, 00:10:54.225 { 00:10:54.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.225 "dma_device_type": 2 00:10:54.225 } 00:10:54.225 ], 00:10:54.225 "driver_specific": {} 00:10:54.225 } 00:10:54.225 ] 
00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.225 
16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.225 "name": "Existed_Raid", 00:10:54.225 "uuid": "dd49f82c-eb67-4a89-9fbd-1e855215be39", 00:10:54.225 "strip_size_kb": 0, 00:10:54.225 "state": "online", 00:10:54.225 "raid_level": "raid1", 00:10:54.225 "superblock": true, 00:10:54.225 "num_base_bdevs": 3, 00:10:54.225 "num_base_bdevs_discovered": 3, 00:10:54.225 "num_base_bdevs_operational": 3, 00:10:54.225 "base_bdevs_list": [ 00:10:54.225 { 00:10:54.225 "name": "BaseBdev1", 00:10:54.225 "uuid": "5351ce76-bb7c-4475-a685-7d4ea8fbf1c3", 00:10:54.225 "is_configured": true, 00:10:54.225 "data_offset": 2048, 00:10:54.225 "data_size": 63488 00:10:54.225 }, 00:10:54.225 { 00:10:54.225 "name": "BaseBdev2", 00:10:54.225 "uuid": "d5aa0c95-2d61-4942-8360-d13bb4b4db69", 00:10:54.225 "is_configured": true, 00:10:54.225 "data_offset": 2048, 00:10:54.225 "data_size": 63488 00:10:54.225 }, 00:10:54.225 { 00:10:54.225 "name": "BaseBdev3", 00:10:54.225 "uuid": "d00684dc-d749-477e-b121-0ee481ef616b", 00:10:54.225 "is_configured": true, 00:10:54.225 "data_offset": 2048, 00:10:54.225 "data_size": 63488 00:10:54.225 } 00:10:54.225 ] 00:10:54.225 }' 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.225 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.485 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:54.485 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:54.485 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:54.485 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:54.485 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.485 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.485 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:54.485 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.485 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.485 16:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.485 [2024-11-08 16:52:23.971274] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.485 16:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.745 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.745 "name": "Existed_Raid", 00:10:54.745 "aliases": [ 00:10:54.746 "dd49f82c-eb67-4a89-9fbd-1e855215be39" 00:10:54.746 ], 00:10:54.746 "product_name": "Raid Volume", 00:10:54.746 "block_size": 512, 00:10:54.746 "num_blocks": 63488, 00:10:54.746 "uuid": "dd49f82c-eb67-4a89-9fbd-1e855215be39", 00:10:54.746 "assigned_rate_limits": { 00:10:54.746 "rw_ios_per_sec": 0, 00:10:54.746 "rw_mbytes_per_sec": 0, 00:10:54.746 "r_mbytes_per_sec": 0, 00:10:54.746 "w_mbytes_per_sec": 0 00:10:54.746 }, 00:10:54.746 "claimed": false, 00:10:54.746 "zoned": false, 00:10:54.746 "supported_io_types": { 00:10:54.746 "read": true, 00:10:54.746 "write": true, 00:10:54.746 "unmap": false, 00:10:54.746 "flush": false, 00:10:54.746 "reset": true, 00:10:54.746 "nvme_admin": false, 00:10:54.746 "nvme_io": false, 00:10:54.746 "nvme_io_md": false, 00:10:54.746 "write_zeroes": true, 
00:10:54.746 "zcopy": false, 00:10:54.746 "get_zone_info": false, 00:10:54.746 "zone_management": false, 00:10:54.746 "zone_append": false, 00:10:54.746 "compare": false, 00:10:54.746 "compare_and_write": false, 00:10:54.746 "abort": false, 00:10:54.746 "seek_hole": false, 00:10:54.746 "seek_data": false, 00:10:54.746 "copy": false, 00:10:54.746 "nvme_iov_md": false 00:10:54.746 }, 00:10:54.746 "memory_domains": [ 00:10:54.746 { 00:10:54.746 "dma_device_id": "system", 00:10:54.746 "dma_device_type": 1 00:10:54.746 }, 00:10:54.746 { 00:10:54.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.746 "dma_device_type": 2 00:10:54.746 }, 00:10:54.746 { 00:10:54.746 "dma_device_id": "system", 00:10:54.746 "dma_device_type": 1 00:10:54.746 }, 00:10:54.746 { 00:10:54.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.746 "dma_device_type": 2 00:10:54.746 }, 00:10:54.746 { 00:10:54.746 "dma_device_id": "system", 00:10:54.746 "dma_device_type": 1 00:10:54.746 }, 00:10:54.746 { 00:10:54.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.746 "dma_device_type": 2 00:10:54.746 } 00:10:54.746 ], 00:10:54.746 "driver_specific": { 00:10:54.746 "raid": { 00:10:54.746 "uuid": "dd49f82c-eb67-4a89-9fbd-1e855215be39", 00:10:54.746 "strip_size_kb": 0, 00:10:54.746 "state": "online", 00:10:54.746 "raid_level": "raid1", 00:10:54.746 "superblock": true, 00:10:54.746 "num_base_bdevs": 3, 00:10:54.746 "num_base_bdevs_discovered": 3, 00:10:54.746 "num_base_bdevs_operational": 3, 00:10:54.746 "base_bdevs_list": [ 00:10:54.746 { 00:10:54.746 "name": "BaseBdev1", 00:10:54.746 "uuid": "5351ce76-bb7c-4475-a685-7d4ea8fbf1c3", 00:10:54.746 "is_configured": true, 00:10:54.746 "data_offset": 2048, 00:10:54.746 "data_size": 63488 00:10:54.746 }, 00:10:54.746 { 00:10:54.746 "name": "BaseBdev2", 00:10:54.746 "uuid": "d5aa0c95-2d61-4942-8360-d13bb4b4db69", 00:10:54.746 "is_configured": true, 00:10:54.746 "data_offset": 2048, 00:10:54.746 "data_size": 63488 00:10:54.746 }, 00:10:54.746 { 
00:10:54.746 "name": "BaseBdev3", 00:10:54.746 "uuid": "d00684dc-d749-477e-b121-0ee481ef616b", 00:10:54.746 "is_configured": true, 00:10:54.746 "data_offset": 2048, 00:10:54.746 "data_size": 63488 00:10:54.746 } 00:10:54.746 ] 00:10:54.746 } 00:10:54.746 } 00:10:54.746 }' 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:54.746 BaseBdev2 00:10:54.746 BaseBdev3' 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.746 16:52:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.746 [2024-11-08 16:52:24.246547] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.746 
16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.746 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.006 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.006 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.006 "name": "Existed_Raid", 00:10:55.006 "uuid": "dd49f82c-eb67-4a89-9fbd-1e855215be39", 00:10:55.006 "strip_size_kb": 0, 00:10:55.006 "state": "online", 00:10:55.006 "raid_level": "raid1", 00:10:55.006 "superblock": true, 00:10:55.006 "num_base_bdevs": 3, 00:10:55.006 "num_base_bdevs_discovered": 2, 00:10:55.006 "num_base_bdevs_operational": 2, 00:10:55.006 "base_bdevs_list": [ 00:10:55.006 { 00:10:55.006 "name": null, 00:10:55.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.006 "is_configured": false, 00:10:55.006 "data_offset": 0, 00:10:55.006 "data_size": 63488 00:10:55.006 }, 00:10:55.006 { 00:10:55.006 "name": "BaseBdev2", 00:10:55.006 "uuid": "d5aa0c95-2d61-4942-8360-d13bb4b4db69", 00:10:55.006 "is_configured": true, 00:10:55.006 "data_offset": 2048, 00:10:55.006 "data_size": 63488 00:10:55.006 }, 00:10:55.006 { 00:10:55.006 "name": "BaseBdev3", 00:10:55.006 "uuid": "d00684dc-d749-477e-b121-0ee481ef616b", 00:10:55.006 "is_configured": true, 00:10:55.006 "data_offset": 2048, 00:10:55.006 "data_size": 63488 00:10:55.006 } 00:10:55.006 ] 00:10:55.006 }' 00:10:55.006 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.006 
16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.265 [2024-11-08 16:52:24.768870] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.265 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.526 [2024-11-08 16:52:24.840068] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:55.526 [2024-11-08 16:52:24.840221] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.526 [2024-11-08 16:52:24.851977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.526 [2024-11-08 16:52:24.852119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.526 [2024-11-08 16:52:24.852145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.526 BaseBdev2 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.526 16:52:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.526 [ 00:10:55.526 { 00:10:55.526 "name": "BaseBdev2", 00:10:55.526 "aliases": [ 00:10:55.526 "75956538-63a1-476c-977e-7b0d5914e352" 00:10:55.526 ], 00:10:55.526 "product_name": "Malloc disk", 00:10:55.526 "block_size": 512, 00:10:55.526 "num_blocks": 65536, 00:10:55.526 "uuid": "75956538-63a1-476c-977e-7b0d5914e352", 00:10:55.526 "assigned_rate_limits": { 00:10:55.526 "rw_ios_per_sec": 0, 00:10:55.526 "rw_mbytes_per_sec": 0, 00:10:55.526 "r_mbytes_per_sec": 0, 00:10:55.526 "w_mbytes_per_sec": 0 00:10:55.526 }, 00:10:55.526 "claimed": false, 00:10:55.526 "zoned": false, 00:10:55.526 "supported_io_types": { 00:10:55.526 "read": true, 00:10:55.526 "write": true, 00:10:55.526 "unmap": true, 00:10:55.526 "flush": true, 00:10:55.526 "reset": true, 00:10:55.526 "nvme_admin": false, 00:10:55.526 "nvme_io": false, 00:10:55.526 "nvme_io_md": false, 00:10:55.526 
"write_zeroes": true, 00:10:55.526 "zcopy": true, 00:10:55.526 "get_zone_info": false, 00:10:55.526 "zone_management": false, 00:10:55.526 "zone_append": false, 00:10:55.526 "compare": false, 00:10:55.526 "compare_and_write": false, 00:10:55.526 "abort": true, 00:10:55.526 "seek_hole": false, 00:10:55.526 "seek_data": false, 00:10:55.526 "copy": true, 00:10:55.526 "nvme_iov_md": false 00:10:55.526 }, 00:10:55.526 "memory_domains": [ 00:10:55.526 { 00:10:55.526 "dma_device_id": "system", 00:10:55.526 "dma_device_type": 1 00:10:55.526 }, 00:10:55.526 { 00:10:55.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.526 "dma_device_type": 2 00:10:55.526 } 00:10:55.526 ], 00:10:55.526 "driver_specific": {} 00:10:55.526 } 00:10:55.526 ] 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.526 BaseBdev3 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.526 16:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.526 [ 00:10:55.526 { 00:10:55.526 "name": "BaseBdev3", 00:10:55.526 "aliases": [ 00:10:55.526 "f3bb45f9-d74c-4cdb-ab57-d95fed0b9e94" 00:10:55.526 ], 00:10:55.526 "product_name": "Malloc disk", 00:10:55.526 "block_size": 512, 00:10:55.526 "num_blocks": 65536, 00:10:55.526 "uuid": "f3bb45f9-d74c-4cdb-ab57-d95fed0b9e94", 00:10:55.526 "assigned_rate_limits": { 00:10:55.526 "rw_ios_per_sec": 0, 00:10:55.526 "rw_mbytes_per_sec": 0, 00:10:55.526 "r_mbytes_per_sec": 0, 00:10:55.526 "w_mbytes_per_sec": 0 00:10:55.526 }, 00:10:55.526 "claimed": false, 00:10:55.526 "zoned": false, 00:10:55.526 "supported_io_types": { 00:10:55.526 "read": true, 00:10:55.526 "write": true, 00:10:55.526 "unmap": true, 00:10:55.526 "flush": true, 00:10:55.526 "reset": true, 00:10:55.526 "nvme_admin": false, 00:10:55.526 "nvme_io": false, 
00:10:55.526 "nvme_io_md": false, 00:10:55.526 "write_zeroes": true, 00:10:55.526 "zcopy": true, 00:10:55.526 "get_zone_info": false, 00:10:55.526 "zone_management": false, 00:10:55.526 "zone_append": false, 00:10:55.526 "compare": false, 00:10:55.526 "compare_and_write": false, 00:10:55.527 "abort": true, 00:10:55.527 "seek_hole": false, 00:10:55.527 "seek_data": false, 00:10:55.527 "copy": true, 00:10:55.527 "nvme_iov_md": false 00:10:55.527 }, 00:10:55.527 "memory_domains": [ 00:10:55.527 { 00:10:55.527 "dma_device_id": "system", 00:10:55.527 "dma_device_type": 1 00:10:55.527 }, 00:10:55.527 { 00:10:55.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.527 "dma_device_type": 2 00:10:55.527 } 00:10:55.527 ], 00:10:55.527 "driver_specific": {} 00:10:55.527 } 00:10:55.527 ] 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.527 [2024-11-08 16:52:25.020817] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.527 [2024-11-08 16:52:25.020877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.527 [2024-11-08 16:52:25.020896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:10:55.527 [2024-11-08 16:52:25.022802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.527 16:52:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.787 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.787 "name": "Existed_Raid", 00:10:55.787 "uuid": "a7a62871-d7d6-48b6-8779-9ee6f9324ca0", 00:10:55.787 "strip_size_kb": 0, 00:10:55.787 "state": "configuring", 00:10:55.787 "raid_level": "raid1", 00:10:55.787 "superblock": true, 00:10:55.787 "num_base_bdevs": 3, 00:10:55.787 "num_base_bdevs_discovered": 2, 00:10:55.787 "num_base_bdevs_operational": 3, 00:10:55.787 "base_bdevs_list": [ 00:10:55.787 { 00:10:55.787 "name": "BaseBdev1", 00:10:55.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.787 "is_configured": false, 00:10:55.787 "data_offset": 0, 00:10:55.787 "data_size": 0 00:10:55.787 }, 00:10:55.787 { 00:10:55.787 "name": "BaseBdev2", 00:10:55.787 "uuid": "75956538-63a1-476c-977e-7b0d5914e352", 00:10:55.787 "is_configured": true, 00:10:55.787 "data_offset": 2048, 00:10:55.787 "data_size": 63488 00:10:55.787 }, 00:10:55.787 { 00:10:55.787 "name": "BaseBdev3", 00:10:55.787 "uuid": "f3bb45f9-d74c-4cdb-ab57-d95fed0b9e94", 00:10:55.787 "is_configured": true, 00:10:55.787 "data_offset": 2048, 00:10:55.787 "data_size": 63488 00:10:55.787 } 00:10:55.787 ] 00:10:55.787 }' 00:10:55.787 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.787 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.046 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:56.046 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.046 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.046 [2024-11-08 16:52:25.460074] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:56.046 16:52:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.046 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:56.046 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.047 "name": "Existed_Raid", 00:10:56.047 "uuid": 
"a7a62871-d7d6-48b6-8779-9ee6f9324ca0", 00:10:56.047 "strip_size_kb": 0, 00:10:56.047 "state": "configuring", 00:10:56.047 "raid_level": "raid1", 00:10:56.047 "superblock": true, 00:10:56.047 "num_base_bdevs": 3, 00:10:56.047 "num_base_bdevs_discovered": 1, 00:10:56.047 "num_base_bdevs_operational": 3, 00:10:56.047 "base_bdevs_list": [ 00:10:56.047 { 00:10:56.047 "name": "BaseBdev1", 00:10:56.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.047 "is_configured": false, 00:10:56.047 "data_offset": 0, 00:10:56.047 "data_size": 0 00:10:56.047 }, 00:10:56.047 { 00:10:56.047 "name": null, 00:10:56.047 "uuid": "75956538-63a1-476c-977e-7b0d5914e352", 00:10:56.047 "is_configured": false, 00:10:56.047 "data_offset": 0, 00:10:56.047 "data_size": 63488 00:10:56.047 }, 00:10:56.047 { 00:10:56.047 "name": "BaseBdev3", 00:10:56.047 "uuid": "f3bb45f9-d74c-4cdb-ab57-d95fed0b9e94", 00:10:56.047 "is_configured": true, 00:10:56.047 "data_offset": 2048, 00:10:56.047 "data_size": 63488 00:10:56.047 } 00:10:56.047 ] 00:10:56.047 }' 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.047 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.616 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.616 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:56.617 16:52:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.617 [2024-11-08 16:52:25.962250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.617 BaseBdev1 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.617 [ 00:10:56.617 { 00:10:56.617 "name": "BaseBdev1", 00:10:56.617 "aliases": [ 00:10:56.617 "8eb6db81-6479-42b0-bf0d-8019c3e1c2be" 00:10:56.617 ], 00:10:56.617 "product_name": "Malloc disk", 00:10:56.617 "block_size": 512, 00:10:56.617 "num_blocks": 65536, 00:10:56.617 "uuid": "8eb6db81-6479-42b0-bf0d-8019c3e1c2be", 00:10:56.617 "assigned_rate_limits": { 00:10:56.617 "rw_ios_per_sec": 0, 00:10:56.617 "rw_mbytes_per_sec": 0, 00:10:56.617 "r_mbytes_per_sec": 0, 00:10:56.617 "w_mbytes_per_sec": 0 00:10:56.617 }, 00:10:56.617 "claimed": true, 00:10:56.617 "claim_type": "exclusive_write", 00:10:56.617 "zoned": false, 00:10:56.617 "supported_io_types": { 00:10:56.617 "read": true, 00:10:56.617 "write": true, 00:10:56.617 "unmap": true, 00:10:56.617 "flush": true, 00:10:56.617 "reset": true, 00:10:56.617 "nvme_admin": false, 00:10:56.617 "nvme_io": false, 00:10:56.617 "nvme_io_md": false, 00:10:56.617 "write_zeroes": true, 00:10:56.617 "zcopy": true, 00:10:56.617 "get_zone_info": false, 00:10:56.617 "zone_management": false, 00:10:56.617 "zone_append": false, 00:10:56.617 "compare": false, 00:10:56.617 "compare_and_write": false, 00:10:56.617 "abort": true, 00:10:56.617 "seek_hole": false, 00:10:56.617 "seek_data": false, 00:10:56.617 "copy": true, 00:10:56.617 "nvme_iov_md": false 00:10:56.617 }, 00:10:56.617 "memory_domains": [ 00:10:56.617 { 00:10:56.617 "dma_device_id": "system", 00:10:56.617 "dma_device_type": 1 00:10:56.617 }, 00:10:56.617 { 00:10:56.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.617 "dma_device_type": 2 00:10:56.617 } 00:10:56.617 ], 00:10:56.617 "driver_specific": {} 00:10:56.617 } 00:10:56.617 ] 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:56.617 
16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.617 16:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.617 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.617 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.617 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.617 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.617 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.617 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.617 "name": "Existed_Raid", 00:10:56.617 "uuid": "a7a62871-d7d6-48b6-8779-9ee6f9324ca0", 00:10:56.617 "strip_size_kb": 0, 
00:10:56.617 "state": "configuring", 00:10:56.617 "raid_level": "raid1", 00:10:56.617 "superblock": true, 00:10:56.617 "num_base_bdevs": 3, 00:10:56.617 "num_base_bdevs_discovered": 2, 00:10:56.617 "num_base_bdevs_operational": 3, 00:10:56.617 "base_bdevs_list": [ 00:10:56.617 { 00:10:56.617 "name": "BaseBdev1", 00:10:56.617 "uuid": "8eb6db81-6479-42b0-bf0d-8019c3e1c2be", 00:10:56.617 "is_configured": true, 00:10:56.617 "data_offset": 2048, 00:10:56.617 "data_size": 63488 00:10:56.617 }, 00:10:56.617 { 00:10:56.617 "name": null, 00:10:56.617 "uuid": "75956538-63a1-476c-977e-7b0d5914e352", 00:10:56.617 "is_configured": false, 00:10:56.617 "data_offset": 0, 00:10:56.617 "data_size": 63488 00:10:56.617 }, 00:10:56.617 { 00:10:56.617 "name": "BaseBdev3", 00:10:56.617 "uuid": "f3bb45f9-d74c-4cdb-ab57-d95fed0b9e94", 00:10:56.617 "is_configured": true, 00:10:56.617 "data_offset": 2048, 00:10:56.617 "data_size": 63488 00:10:56.617 } 00:10:56.617 ] 00:10:56.617 }' 00:10:56.617 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.617 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.877 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.877 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.877 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.877 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.878 [2024-11-08 16:52:26.365604] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.878 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.177 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.177 "name": "Existed_Raid", 00:10:57.177 "uuid": "a7a62871-d7d6-48b6-8779-9ee6f9324ca0", 00:10:57.177 "strip_size_kb": 0, 00:10:57.177 "state": "configuring", 00:10:57.177 "raid_level": "raid1", 00:10:57.177 "superblock": true, 00:10:57.177 "num_base_bdevs": 3, 00:10:57.177 "num_base_bdevs_discovered": 1, 00:10:57.177 "num_base_bdevs_operational": 3, 00:10:57.177 "base_bdevs_list": [ 00:10:57.177 { 00:10:57.177 "name": "BaseBdev1", 00:10:57.177 "uuid": "8eb6db81-6479-42b0-bf0d-8019c3e1c2be", 00:10:57.177 "is_configured": true, 00:10:57.177 "data_offset": 2048, 00:10:57.177 "data_size": 63488 00:10:57.177 }, 00:10:57.177 { 00:10:57.177 "name": null, 00:10:57.177 "uuid": "75956538-63a1-476c-977e-7b0d5914e352", 00:10:57.177 "is_configured": false, 00:10:57.177 "data_offset": 0, 00:10:57.177 "data_size": 63488 00:10:57.177 }, 00:10:57.177 { 00:10:57.177 "name": null, 00:10:57.177 "uuid": "f3bb45f9-d74c-4cdb-ab57-d95fed0b9e94", 00:10:57.177 "is_configured": false, 00:10:57.177 "data_offset": 0, 00:10:57.177 "data_size": 63488 00:10:57.177 } 00:10:57.177 ] 00:10:57.177 }' 00:10:57.177 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.177 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.449 [2024-11-08 16:52:26.892783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.449 "name": "Existed_Raid", 00:10:57.449 "uuid": "a7a62871-d7d6-48b6-8779-9ee6f9324ca0", 00:10:57.449 "strip_size_kb": 0, 00:10:57.449 "state": "configuring", 00:10:57.449 "raid_level": "raid1", 00:10:57.449 "superblock": true, 00:10:57.449 "num_base_bdevs": 3, 00:10:57.449 "num_base_bdevs_discovered": 2, 00:10:57.449 "num_base_bdevs_operational": 3, 00:10:57.449 "base_bdevs_list": [ 00:10:57.449 { 00:10:57.449 "name": "BaseBdev1", 00:10:57.449 "uuid": "8eb6db81-6479-42b0-bf0d-8019c3e1c2be", 00:10:57.449 "is_configured": true, 00:10:57.449 "data_offset": 2048, 00:10:57.449 "data_size": 63488 00:10:57.449 }, 00:10:57.449 { 00:10:57.449 "name": null, 00:10:57.449 "uuid": "75956538-63a1-476c-977e-7b0d5914e352", 00:10:57.449 "is_configured": false, 00:10:57.449 "data_offset": 0, 00:10:57.449 "data_size": 63488 00:10:57.449 }, 00:10:57.449 { 00:10:57.449 "name": "BaseBdev3", 00:10:57.449 "uuid": "f3bb45f9-d74c-4cdb-ab57-d95fed0b9e94", 00:10:57.449 "is_configured": true, 00:10:57.449 "data_offset": 2048, 00:10:57.449 "data_size": 63488 00:10:57.449 } 00:10:57.449 ] 00:10:57.449 }' 00:10:57.449 16:52:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.449 16:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.019 [2024-11-08 16:52:27.344019] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.019 16:52:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.019 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.019 "name": "Existed_Raid", 00:10:58.019 "uuid": "a7a62871-d7d6-48b6-8779-9ee6f9324ca0", 00:10:58.019 "strip_size_kb": 0, 00:10:58.019 "state": "configuring", 00:10:58.019 "raid_level": "raid1", 00:10:58.019 "superblock": true, 00:10:58.019 "num_base_bdevs": 3, 00:10:58.019 "num_base_bdevs_discovered": 1, 00:10:58.019 "num_base_bdevs_operational": 3, 00:10:58.019 "base_bdevs_list": [ 00:10:58.019 { 00:10:58.019 "name": null, 00:10:58.019 "uuid": "8eb6db81-6479-42b0-bf0d-8019c3e1c2be", 00:10:58.019 "is_configured": false, 00:10:58.019 "data_offset": 0, 00:10:58.019 "data_size": 63488 00:10:58.019 }, 00:10:58.019 { 00:10:58.019 
"name": null, 00:10:58.019 "uuid": "75956538-63a1-476c-977e-7b0d5914e352", 00:10:58.019 "is_configured": false, 00:10:58.019 "data_offset": 0, 00:10:58.019 "data_size": 63488 00:10:58.019 }, 00:10:58.019 { 00:10:58.019 "name": "BaseBdev3", 00:10:58.019 "uuid": "f3bb45f9-d74c-4cdb-ab57-d95fed0b9e94", 00:10:58.019 "is_configured": true, 00:10:58.019 "data_offset": 2048, 00:10:58.019 "data_size": 63488 00:10:58.019 } 00:10:58.019 ] 00:10:58.019 }' 00:10:58.020 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.020 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.279 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.279 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.279 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.279 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.279 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.539 [2024-11-08 16:52:27.833521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.539 16:52:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.539 "name": "Existed_Raid", 00:10:58.539 "uuid": "a7a62871-d7d6-48b6-8779-9ee6f9324ca0", 00:10:58.539 "strip_size_kb": 0, 
00:10:58.539 "state": "configuring", 00:10:58.539 "raid_level": "raid1", 00:10:58.539 "superblock": true, 00:10:58.539 "num_base_bdevs": 3, 00:10:58.539 "num_base_bdevs_discovered": 2, 00:10:58.539 "num_base_bdevs_operational": 3, 00:10:58.539 "base_bdevs_list": [ 00:10:58.539 { 00:10:58.539 "name": null, 00:10:58.539 "uuid": "8eb6db81-6479-42b0-bf0d-8019c3e1c2be", 00:10:58.539 "is_configured": false, 00:10:58.539 "data_offset": 0, 00:10:58.539 "data_size": 63488 00:10:58.539 }, 00:10:58.539 { 00:10:58.539 "name": "BaseBdev2", 00:10:58.539 "uuid": "75956538-63a1-476c-977e-7b0d5914e352", 00:10:58.539 "is_configured": true, 00:10:58.539 "data_offset": 2048, 00:10:58.539 "data_size": 63488 00:10:58.539 }, 00:10:58.539 { 00:10:58.539 "name": "BaseBdev3", 00:10:58.539 "uuid": "f3bb45f9-d74c-4cdb-ab57-d95fed0b9e94", 00:10:58.539 "is_configured": true, 00:10:58.539 "data_offset": 2048, 00:10:58.539 "data_size": 63488 00:10:58.539 } 00:10:58.539 ] 00:10:58.539 }' 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.539 16:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.799 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.799 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.799 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.799 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.799 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.799 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:58.799 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:10:58.799 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.799 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.799 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.058 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8eb6db81-6479-42b0-bf0d-8019c3e1c2be 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.059 [2024-11-08 16:52:28.355519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:59.059 [2024-11-08 16:52:28.355728] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:59.059 [2024-11-08 16:52:28.355741] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:59.059 NewBaseBdev 00:10:59.059 [2024-11-08 16:52:28.356008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:59.059 [2024-11-08 16:52:28.356152] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:59.059 [2024-11-08 16:52:28.356167] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:59.059 [2024-11-08 16:52:28.356266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.059 [ 00:10:59.059 { 00:10:59.059 "name": "NewBaseBdev", 00:10:59.059 "aliases": [ 00:10:59.059 "8eb6db81-6479-42b0-bf0d-8019c3e1c2be" 00:10:59.059 ], 00:10:59.059 "product_name": "Malloc disk", 00:10:59.059 "block_size": 512, 00:10:59.059 "num_blocks": 65536, 00:10:59.059 "uuid": "8eb6db81-6479-42b0-bf0d-8019c3e1c2be", 00:10:59.059 "assigned_rate_limits": { 00:10:59.059 "rw_ios_per_sec": 0, 00:10:59.059 "rw_mbytes_per_sec": 0, 00:10:59.059 "r_mbytes_per_sec": 0, 00:10:59.059 "w_mbytes_per_sec": 0 00:10:59.059 }, 00:10:59.059 "claimed": 
true, 00:10:59.059 "claim_type": "exclusive_write", 00:10:59.059 "zoned": false, 00:10:59.059 "supported_io_types": { 00:10:59.059 "read": true, 00:10:59.059 "write": true, 00:10:59.059 "unmap": true, 00:10:59.059 "flush": true, 00:10:59.059 "reset": true, 00:10:59.059 "nvme_admin": false, 00:10:59.059 "nvme_io": false, 00:10:59.059 "nvme_io_md": false, 00:10:59.059 "write_zeroes": true, 00:10:59.059 "zcopy": true, 00:10:59.059 "get_zone_info": false, 00:10:59.059 "zone_management": false, 00:10:59.059 "zone_append": false, 00:10:59.059 "compare": false, 00:10:59.059 "compare_and_write": false, 00:10:59.059 "abort": true, 00:10:59.059 "seek_hole": false, 00:10:59.059 "seek_data": false, 00:10:59.059 "copy": true, 00:10:59.059 "nvme_iov_md": false 00:10:59.059 }, 00:10:59.059 "memory_domains": [ 00:10:59.059 { 00:10:59.059 "dma_device_id": "system", 00:10:59.059 "dma_device_type": 1 00:10:59.059 }, 00:10:59.059 { 00:10:59.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.059 "dma_device_type": 2 00:10:59.059 } 00:10:59.059 ], 00:10:59.059 "driver_specific": {} 00:10:59.059 } 00:10:59.059 ] 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.059 "name": "Existed_Raid", 00:10:59.059 "uuid": "a7a62871-d7d6-48b6-8779-9ee6f9324ca0", 00:10:59.059 "strip_size_kb": 0, 00:10:59.059 "state": "online", 00:10:59.059 "raid_level": "raid1", 00:10:59.059 "superblock": true, 00:10:59.059 "num_base_bdevs": 3, 00:10:59.059 "num_base_bdevs_discovered": 3, 00:10:59.059 "num_base_bdevs_operational": 3, 00:10:59.059 "base_bdevs_list": [ 00:10:59.059 { 00:10:59.059 "name": "NewBaseBdev", 00:10:59.059 "uuid": "8eb6db81-6479-42b0-bf0d-8019c3e1c2be", 00:10:59.059 "is_configured": true, 00:10:59.059 "data_offset": 2048, 00:10:59.059 "data_size": 63488 00:10:59.059 }, 00:10:59.059 { 00:10:59.059 "name": "BaseBdev2", 00:10:59.059 "uuid": "75956538-63a1-476c-977e-7b0d5914e352", 00:10:59.059 "is_configured": true, 00:10:59.059 "data_offset": 
2048, 00:10:59.059 "data_size": 63488 00:10:59.059 }, 00:10:59.059 { 00:10:59.059 "name": "BaseBdev3", 00:10:59.059 "uuid": "f3bb45f9-d74c-4cdb-ab57-d95fed0b9e94", 00:10:59.059 "is_configured": true, 00:10:59.059 "data_offset": 2048, 00:10:59.059 "data_size": 63488 00:10:59.059 } 00:10:59.059 ] 00:10:59.059 }' 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.059 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.319 [2024-11-08 16:52:28.775246] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.319 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:10:59.319 "name": "Existed_Raid", 00:10:59.319 "aliases": [ 00:10:59.319 "a7a62871-d7d6-48b6-8779-9ee6f9324ca0" 00:10:59.319 ], 00:10:59.319 "product_name": "Raid Volume", 00:10:59.319 "block_size": 512, 00:10:59.319 "num_blocks": 63488, 00:10:59.319 "uuid": "a7a62871-d7d6-48b6-8779-9ee6f9324ca0", 00:10:59.319 "assigned_rate_limits": { 00:10:59.319 "rw_ios_per_sec": 0, 00:10:59.319 "rw_mbytes_per_sec": 0, 00:10:59.319 "r_mbytes_per_sec": 0, 00:10:59.319 "w_mbytes_per_sec": 0 00:10:59.319 }, 00:10:59.319 "claimed": false, 00:10:59.319 "zoned": false, 00:10:59.319 "supported_io_types": { 00:10:59.319 "read": true, 00:10:59.319 "write": true, 00:10:59.319 "unmap": false, 00:10:59.319 "flush": false, 00:10:59.319 "reset": true, 00:10:59.319 "nvme_admin": false, 00:10:59.319 "nvme_io": false, 00:10:59.319 "nvme_io_md": false, 00:10:59.319 "write_zeroes": true, 00:10:59.319 "zcopy": false, 00:10:59.319 "get_zone_info": false, 00:10:59.319 "zone_management": false, 00:10:59.319 "zone_append": false, 00:10:59.319 "compare": false, 00:10:59.319 "compare_and_write": false, 00:10:59.319 "abort": false, 00:10:59.320 "seek_hole": false, 00:10:59.320 "seek_data": false, 00:10:59.320 "copy": false, 00:10:59.320 "nvme_iov_md": false 00:10:59.320 }, 00:10:59.320 "memory_domains": [ 00:10:59.320 { 00:10:59.320 "dma_device_id": "system", 00:10:59.320 "dma_device_type": 1 00:10:59.320 }, 00:10:59.320 { 00:10:59.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.320 "dma_device_type": 2 00:10:59.320 }, 00:10:59.320 { 00:10:59.320 "dma_device_id": "system", 00:10:59.320 "dma_device_type": 1 00:10:59.320 }, 00:10:59.320 { 00:10:59.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.320 "dma_device_type": 2 00:10:59.320 }, 00:10:59.320 { 00:10:59.320 "dma_device_id": "system", 00:10:59.320 "dma_device_type": 1 00:10:59.320 }, 00:10:59.320 { 00:10:59.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.320 "dma_device_type": 2 00:10:59.320 } 00:10:59.320 ], 
00:10:59.320 "driver_specific": { 00:10:59.320 "raid": { 00:10:59.320 "uuid": "a7a62871-d7d6-48b6-8779-9ee6f9324ca0", 00:10:59.320 "strip_size_kb": 0, 00:10:59.320 "state": "online", 00:10:59.320 "raid_level": "raid1", 00:10:59.320 "superblock": true, 00:10:59.320 "num_base_bdevs": 3, 00:10:59.320 "num_base_bdevs_discovered": 3, 00:10:59.320 "num_base_bdevs_operational": 3, 00:10:59.320 "base_bdevs_list": [ 00:10:59.320 { 00:10:59.320 "name": "NewBaseBdev", 00:10:59.320 "uuid": "8eb6db81-6479-42b0-bf0d-8019c3e1c2be", 00:10:59.320 "is_configured": true, 00:10:59.320 "data_offset": 2048, 00:10:59.320 "data_size": 63488 00:10:59.320 }, 00:10:59.320 { 00:10:59.320 "name": "BaseBdev2", 00:10:59.320 "uuid": "75956538-63a1-476c-977e-7b0d5914e352", 00:10:59.320 "is_configured": true, 00:10:59.320 "data_offset": 2048, 00:10:59.320 "data_size": 63488 00:10:59.320 }, 00:10:59.320 { 00:10:59.320 "name": "BaseBdev3", 00:10:59.320 "uuid": "f3bb45f9-d74c-4cdb-ab57-d95fed0b9e94", 00:10:59.320 "is_configured": true, 00:10:59.320 "data_offset": 2048, 00:10:59.320 "data_size": 63488 00:10:59.320 } 00:10:59.320 ] 00:10:59.320 } 00:10:59.320 } 00:10:59.320 }' 00:10:59.320 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:59.580 BaseBdev2 00:10:59.580 BaseBdev3' 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b NewBaseBdev 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:59.580 16:52:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.580 16:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.580 [2024-11-08 16:52:29.038473] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.580 [2024-11-08 16:52:29.038506] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.580 [2024-11-08 16:52:29.038576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.580 [2024-11-08 16:52:29.038849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.580 [2024-11-08 16:52:29.038868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79053 00:10:59.580 16:52:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 79053 ']' 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79053 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79053 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:59.580 killing process with pid 79053 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79053' 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79053 00:10:59.580 [2024-11-08 16:52:29.080799] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.580 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79053 00:10:59.840 [2024-11-08 16:52:29.112560] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.840 16:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:59.840 00:10:59.840 real 0m8.770s 00:10:59.840 user 0m14.908s 00:10:59.840 sys 0m1.806s 00:10:59.840 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.840 16:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.840 ************************************ 00:10:59.840 END TEST raid_state_function_test_sb 00:10:59.840 ************************************ 00:11:00.100 16:52:29 
bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:11:00.100 16:52:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:00.100 16:52:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.100 16:52:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.100 ************************************ 00:11:00.100 START TEST raid_superblock_test 00:11:00.100 ************************************ 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:00.100 
16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79662 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79662 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79662 ']' 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.100 16:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.100 [2024-11-08 16:52:29.514107] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:00.100 [2024-11-08 16:52:29.514240] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79662 ] 00:11:00.359 [2024-11-08 16:52:29.657339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.359 [2024-11-08 16:52:29.701974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.359 [2024-11-08 16:52:29.744014] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.359 [2024-11-08 16:52:29.744059] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:00.928 
16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.928 malloc1 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.928 [2024-11-08 16:52:30.366411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:00.928 [2024-11-08 16:52:30.366494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.928 [2024-11-08 16:52:30.366516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:00.928 [2024-11-08 16:52:30.366530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.928 [2024-11-08 16:52:30.368710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.928 [2024-11-08 16:52:30.368750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:00.928 pt1 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.928 malloc2 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.928 [2024-11-08 16:52:30.406299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.928 [2024-11-08 16:52:30.406373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.928 [2024-11-08 16:52:30.406396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:00.928 [2024-11-08 16:52:30.406411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.928 [2024-11-08 16:52:30.409295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.928 [2024-11-08 16:52:30.409346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.928 
pt2 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.928 malloc3 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.928 [2024-11-08 16:52:30.434795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:00.928 [2024-11-08 16:52:30.434863] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.928 [2024-11-08 16:52:30.434880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:00.928 [2024-11-08 16:52:30.434891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.928 [2024-11-08 16:52:30.436967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.928 [2024-11-08 16:52:30.437003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:00.928 pt3 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.928 [2024-11-08 16:52:30.446814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:00.928 [2024-11-08 16:52:30.448649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.928 [2024-11-08 16:52:30.448717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:00.928 [2024-11-08 16:52:30.448853] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:00.928 [2024-11-08 16:52:30.448864] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:00.928 [2024-11-08 16:52:30.449106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:00.928 
[2024-11-08 16:52:30.449239] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:00.928 [2024-11-08 16:52:30.449257] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:00.928 [2024-11-08 16:52:30.449373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.928 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.929 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:00.929 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.929 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.929 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.929 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.929 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.929 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.188 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.188 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.188 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.189 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.189 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.189 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.189 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:11:01.189 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.189 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.189 "name": "raid_bdev1", 00:11:01.189 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:01.189 "strip_size_kb": 0, 00:11:01.189 "state": "online", 00:11:01.189 "raid_level": "raid1", 00:11:01.189 "superblock": true, 00:11:01.189 "num_base_bdevs": 3, 00:11:01.189 "num_base_bdevs_discovered": 3, 00:11:01.189 "num_base_bdevs_operational": 3, 00:11:01.189 "base_bdevs_list": [ 00:11:01.189 { 00:11:01.189 "name": "pt1", 00:11:01.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.189 "is_configured": true, 00:11:01.189 "data_offset": 2048, 00:11:01.189 "data_size": 63488 00:11:01.189 }, 00:11:01.189 { 00:11:01.189 "name": "pt2", 00:11:01.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.189 "is_configured": true, 00:11:01.189 "data_offset": 2048, 00:11:01.189 "data_size": 63488 00:11:01.189 }, 00:11:01.189 { 00:11:01.189 "name": "pt3", 00:11:01.189 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.189 "is_configured": true, 00:11:01.189 "data_offset": 2048, 00:11:01.189 "data_size": 63488 00:11:01.189 } 00:11:01.189 ] 00:11:01.189 }' 00:11:01.189 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.189 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.449 16:52:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.449 [2024-11-08 16:52:30.898368] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.449 "name": "raid_bdev1", 00:11:01.449 "aliases": [ 00:11:01.449 "b1d71a9c-b110-44c7-8cbd-999c920c71ab" 00:11:01.449 ], 00:11:01.449 "product_name": "Raid Volume", 00:11:01.449 "block_size": 512, 00:11:01.449 "num_blocks": 63488, 00:11:01.449 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:01.449 "assigned_rate_limits": { 00:11:01.449 "rw_ios_per_sec": 0, 00:11:01.449 "rw_mbytes_per_sec": 0, 00:11:01.449 "r_mbytes_per_sec": 0, 00:11:01.449 "w_mbytes_per_sec": 0 00:11:01.449 }, 00:11:01.449 "claimed": false, 00:11:01.449 "zoned": false, 00:11:01.449 "supported_io_types": { 00:11:01.449 "read": true, 00:11:01.449 "write": true, 00:11:01.449 "unmap": false, 00:11:01.449 "flush": false, 00:11:01.449 "reset": true, 00:11:01.449 "nvme_admin": false, 00:11:01.449 "nvme_io": false, 00:11:01.449 "nvme_io_md": false, 00:11:01.449 "write_zeroes": true, 00:11:01.449 "zcopy": false, 00:11:01.449 "get_zone_info": false, 00:11:01.449 "zone_management": false, 00:11:01.449 "zone_append": false, 00:11:01.449 "compare": false, 00:11:01.449 
"compare_and_write": false, 00:11:01.449 "abort": false, 00:11:01.449 "seek_hole": false, 00:11:01.449 "seek_data": false, 00:11:01.449 "copy": false, 00:11:01.449 "nvme_iov_md": false 00:11:01.449 }, 00:11:01.449 "memory_domains": [ 00:11:01.449 { 00:11:01.449 "dma_device_id": "system", 00:11:01.449 "dma_device_type": 1 00:11:01.449 }, 00:11:01.449 { 00:11:01.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.449 "dma_device_type": 2 00:11:01.449 }, 00:11:01.449 { 00:11:01.449 "dma_device_id": "system", 00:11:01.449 "dma_device_type": 1 00:11:01.449 }, 00:11:01.449 { 00:11:01.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.449 "dma_device_type": 2 00:11:01.449 }, 00:11:01.449 { 00:11:01.449 "dma_device_id": "system", 00:11:01.449 "dma_device_type": 1 00:11:01.449 }, 00:11:01.449 { 00:11:01.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.449 "dma_device_type": 2 00:11:01.449 } 00:11:01.449 ], 00:11:01.449 "driver_specific": { 00:11:01.449 "raid": { 00:11:01.449 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:01.449 "strip_size_kb": 0, 00:11:01.449 "state": "online", 00:11:01.449 "raid_level": "raid1", 00:11:01.449 "superblock": true, 00:11:01.449 "num_base_bdevs": 3, 00:11:01.449 "num_base_bdevs_discovered": 3, 00:11:01.449 "num_base_bdevs_operational": 3, 00:11:01.449 "base_bdevs_list": [ 00:11:01.449 { 00:11:01.449 "name": "pt1", 00:11:01.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.449 "is_configured": true, 00:11:01.449 "data_offset": 2048, 00:11:01.449 "data_size": 63488 00:11:01.449 }, 00:11:01.449 { 00:11:01.449 "name": "pt2", 00:11:01.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.449 "is_configured": true, 00:11:01.449 "data_offset": 2048, 00:11:01.449 "data_size": 63488 00:11:01.449 }, 00:11:01.449 { 00:11:01.449 "name": "pt3", 00:11:01.449 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.449 "is_configured": true, 00:11:01.449 "data_offset": 2048, 00:11:01.449 "data_size": 63488 00:11:01.449 } 
00:11:01.449 ] 00:11:01.449 } 00:11:01.449 } 00:11:01.449 }' 00:11:01.449 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.708 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:01.708 pt2 00:11:01.708 pt3' 00:11:01.708 16:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.708 [2024-11-08 16:52:31.169926] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b1d71a9c-b110-44c7-8cbd-999c920c71ab 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b1d71a9c-b110-44c7-8cbd-999c920c71ab ']' 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.708 [2024-11-08 16:52:31.217528] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.708 [2024-11-08 16:52:31.217559] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.708 [2024-11-08 16:52:31.217647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.708 [2024-11-08 16:52:31.217720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.708 [2024-11-08 16:52:31.217732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.708 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.968 16:52:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.969 [2024-11-08 16:52:31.349288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:01.969 [2024-11-08 16:52:31.351216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:01.969 [2024-11-08 16:52:31.351268] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:01.969 [2024-11-08 16:52:31.351318] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:01.969 [2024-11-08 16:52:31.351370] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:01.969 [2024-11-08 16:52:31.351390] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:01.969 [2024-11-08 16:52:31.351403] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.969 [2024-11-08 16:52:31.351414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:11:01.969 request: 00:11:01.969 { 00:11:01.969 "name": "raid_bdev1", 00:11:01.969 "raid_level": "raid1", 00:11:01.969 "base_bdevs": [ 00:11:01.969 "malloc1", 00:11:01.969 "malloc2", 00:11:01.969 "malloc3" 00:11:01.969 ], 00:11:01.969 "superblock": false, 00:11:01.969 "method": "bdev_raid_create", 00:11:01.969 "req_id": 1 00:11:01.969 } 00:11:01.969 Got JSON-RPC error response 00:11:01.969 response: 00:11:01.969 { 00:11:01.969 "code": -17, 00:11:01.969 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:01.969 } 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.969 [2024-11-08 16:52:31.413162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:01.969 [2024-11-08 16:52:31.413227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.969 [2024-11-08 16:52:31.413248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:01.969 [2024-11-08 16:52:31.413259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.969 [2024-11-08 16:52:31.415500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.969 [2024-11-08 16:52:31.415542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:01.969 [2024-11-08 16:52:31.415618] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:01.969 [2024-11-08 16:52:31.415690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:01.969 pt1 00:11:01.969 
16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.969 "name": "raid_bdev1", 00:11:01.969 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:01.969 "strip_size_kb": 0, 00:11:01.969 
"state": "configuring", 00:11:01.969 "raid_level": "raid1", 00:11:01.969 "superblock": true, 00:11:01.969 "num_base_bdevs": 3, 00:11:01.969 "num_base_bdevs_discovered": 1, 00:11:01.969 "num_base_bdevs_operational": 3, 00:11:01.969 "base_bdevs_list": [ 00:11:01.969 { 00:11:01.969 "name": "pt1", 00:11:01.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.969 "is_configured": true, 00:11:01.969 "data_offset": 2048, 00:11:01.969 "data_size": 63488 00:11:01.969 }, 00:11:01.969 { 00:11:01.969 "name": null, 00:11:01.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.969 "is_configured": false, 00:11:01.969 "data_offset": 2048, 00:11:01.969 "data_size": 63488 00:11:01.969 }, 00:11:01.969 { 00:11:01.969 "name": null, 00:11:01.969 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.969 "is_configured": false, 00:11:01.969 "data_offset": 2048, 00:11:01.969 "data_size": 63488 00:11:01.969 } 00:11:01.969 ] 00:11:01.969 }' 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.969 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.541 [2024-11-08 16:52:31.848459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:02.541 [2024-11-08 16:52:31.848544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.541 [2024-11-08 16:52:31.848566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:02.541 
[2024-11-08 16:52:31.848579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.541 [2024-11-08 16:52:31.848999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.541 [2024-11-08 16:52:31.849032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:02.541 [2024-11-08 16:52:31.849110] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:02.541 [2024-11-08 16:52:31.849140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:02.541 pt2 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.541 [2024-11-08 16:52:31.856439] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.541 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.542 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.542 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.542 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.542 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.542 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.542 "name": "raid_bdev1", 00:11:02.542 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:02.542 "strip_size_kb": 0, 00:11:02.542 "state": "configuring", 00:11:02.542 "raid_level": "raid1", 00:11:02.542 "superblock": true, 00:11:02.542 "num_base_bdevs": 3, 00:11:02.542 "num_base_bdevs_discovered": 1, 00:11:02.542 "num_base_bdevs_operational": 3, 00:11:02.542 "base_bdevs_list": [ 00:11:02.542 { 00:11:02.542 "name": "pt1", 00:11:02.542 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.542 "is_configured": true, 00:11:02.542 "data_offset": 2048, 00:11:02.542 "data_size": 63488 00:11:02.542 }, 00:11:02.542 { 00:11:02.542 "name": null, 00:11:02.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.542 "is_configured": false, 00:11:02.542 "data_offset": 0, 00:11:02.542 "data_size": 63488 00:11:02.542 }, 00:11:02.542 { 00:11:02.542 "name": null, 00:11:02.542 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.542 "is_configured": false, 00:11:02.542 
"data_offset": 2048, 00:11:02.542 "data_size": 63488 00:11:02.542 } 00:11:02.542 ] 00:11:02.542 }' 00:11:02.542 16:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.542 16:52:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.804 [2024-11-08 16:52:32.311738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:02.804 [2024-11-08 16:52:32.311808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.804 [2024-11-08 16:52:32.311831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:02.804 [2024-11-08 16:52:32.311842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.804 [2024-11-08 16:52:32.312272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.804 [2024-11-08 16:52:32.312304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:02.804 [2024-11-08 16:52:32.312392] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:02.804 [2024-11-08 16:52:32.312424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:02.804 pt2 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.804 16:52:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.804 [2024-11-08 16:52:32.319667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:02.804 [2024-11-08 16:52:32.319732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.804 [2024-11-08 16:52:32.319752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:02.804 [2024-11-08 16:52:32.319761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.804 [2024-11-08 16:52:32.320128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.804 [2024-11-08 16:52:32.320158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:02.804 [2024-11-08 16:52:32.320228] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:02.804 [2024-11-08 16:52:32.320247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:02.804 [2024-11-08 16:52:32.320348] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:02.804 [2024-11-08 16:52:32.320362] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:02.804 [2024-11-08 16:52:32.320601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:02.804 [2024-11-08 16:52:32.320766] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:11:02.804 [2024-11-08 16:52:32.320785] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:02.804 [2024-11-08 16:52:32.320894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.804 pt3 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.804 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.064 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.064 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.064 16:52:32 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:03.064 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.064 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.064 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.064 "name": "raid_bdev1", 00:11:03.064 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:03.064 "strip_size_kb": 0, 00:11:03.064 "state": "online", 00:11:03.064 "raid_level": "raid1", 00:11:03.064 "superblock": true, 00:11:03.064 "num_base_bdevs": 3, 00:11:03.064 "num_base_bdevs_discovered": 3, 00:11:03.064 "num_base_bdevs_operational": 3, 00:11:03.064 "base_bdevs_list": [ 00:11:03.064 { 00:11:03.064 "name": "pt1", 00:11:03.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.064 "is_configured": true, 00:11:03.064 "data_offset": 2048, 00:11:03.064 "data_size": 63488 00:11:03.064 }, 00:11:03.064 { 00:11:03.064 "name": "pt2", 00:11:03.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.064 "is_configured": true, 00:11:03.064 "data_offset": 2048, 00:11:03.064 "data_size": 63488 00:11:03.064 }, 00:11:03.064 { 00:11:03.064 "name": "pt3", 00:11:03.064 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.064 "is_configured": true, 00:11:03.064 "data_offset": 2048, 00:11:03.064 "data_size": 63488 00:11:03.064 } 00:11:03.064 ] 00:11:03.064 }' 00:11:03.064 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.064 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.324 [2024-11-08 16:52:32.731353] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.324 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.324 "name": "raid_bdev1", 00:11:03.324 "aliases": [ 00:11:03.324 "b1d71a9c-b110-44c7-8cbd-999c920c71ab" 00:11:03.324 ], 00:11:03.324 "product_name": "Raid Volume", 00:11:03.324 "block_size": 512, 00:11:03.324 "num_blocks": 63488, 00:11:03.324 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:03.324 "assigned_rate_limits": { 00:11:03.324 "rw_ios_per_sec": 0, 00:11:03.324 "rw_mbytes_per_sec": 0, 00:11:03.324 "r_mbytes_per_sec": 0, 00:11:03.324 "w_mbytes_per_sec": 0 00:11:03.324 }, 00:11:03.324 "claimed": false, 00:11:03.324 "zoned": false, 00:11:03.324 "supported_io_types": { 00:11:03.324 "read": true, 00:11:03.324 "write": true, 00:11:03.324 "unmap": false, 00:11:03.324 "flush": false, 00:11:03.324 "reset": true, 00:11:03.324 "nvme_admin": false, 00:11:03.324 "nvme_io": false, 00:11:03.324 "nvme_io_md": false, 00:11:03.324 "write_zeroes": true, 00:11:03.324 "zcopy": false, 00:11:03.324 "get_zone_info": false, 
00:11:03.324 "zone_management": false, 00:11:03.324 "zone_append": false, 00:11:03.324 "compare": false, 00:11:03.324 "compare_and_write": false, 00:11:03.324 "abort": false, 00:11:03.324 "seek_hole": false, 00:11:03.324 "seek_data": false, 00:11:03.324 "copy": false, 00:11:03.324 "nvme_iov_md": false 00:11:03.324 }, 00:11:03.324 "memory_domains": [ 00:11:03.324 { 00:11:03.324 "dma_device_id": "system", 00:11:03.324 "dma_device_type": 1 00:11:03.324 }, 00:11:03.324 { 00:11:03.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.325 "dma_device_type": 2 00:11:03.325 }, 00:11:03.325 { 00:11:03.325 "dma_device_id": "system", 00:11:03.325 "dma_device_type": 1 00:11:03.325 }, 00:11:03.325 { 00:11:03.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.325 "dma_device_type": 2 00:11:03.325 }, 00:11:03.325 { 00:11:03.325 "dma_device_id": "system", 00:11:03.325 "dma_device_type": 1 00:11:03.325 }, 00:11:03.325 { 00:11:03.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.325 "dma_device_type": 2 00:11:03.325 } 00:11:03.325 ], 00:11:03.325 "driver_specific": { 00:11:03.325 "raid": { 00:11:03.325 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:03.325 "strip_size_kb": 0, 00:11:03.325 "state": "online", 00:11:03.325 "raid_level": "raid1", 00:11:03.325 "superblock": true, 00:11:03.325 "num_base_bdevs": 3, 00:11:03.325 "num_base_bdevs_discovered": 3, 00:11:03.325 "num_base_bdevs_operational": 3, 00:11:03.325 "base_bdevs_list": [ 00:11:03.325 { 00:11:03.325 "name": "pt1", 00:11:03.325 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.325 "is_configured": true, 00:11:03.325 "data_offset": 2048, 00:11:03.325 "data_size": 63488 00:11:03.325 }, 00:11:03.325 { 00:11:03.325 "name": "pt2", 00:11:03.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.325 "is_configured": true, 00:11:03.325 "data_offset": 2048, 00:11:03.325 "data_size": 63488 00:11:03.325 }, 00:11:03.325 { 00:11:03.325 "name": "pt3", 00:11:03.325 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:03.325 "is_configured": true, 00:11:03.325 "data_offset": 2048, 00:11:03.325 "data_size": 63488 00:11:03.325 } 00:11:03.325 ] 00:11:03.325 } 00:11:03.325 } 00:11:03.325 }' 00:11:03.325 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.325 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:03.325 pt2 00:11:03.325 pt3' 00:11:03.325 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.585 16:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 [2024-11-08 16:52:33.002859] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b1d71a9c-b110-44c7-8cbd-999c920c71ab '!=' b1d71a9c-b110-44c7-8cbd-999c920c71ab ']' 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 [2024-11-08 16:52:33.030542] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.585 16:52:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.585 "name": "raid_bdev1", 00:11:03.585 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:03.585 "strip_size_kb": 0, 00:11:03.585 "state": "online", 00:11:03.585 "raid_level": "raid1", 00:11:03.585 "superblock": true, 00:11:03.585 "num_base_bdevs": 3, 00:11:03.585 "num_base_bdevs_discovered": 2, 00:11:03.585 "num_base_bdevs_operational": 2, 00:11:03.585 "base_bdevs_list": [ 00:11:03.585 { 00:11:03.585 "name": null, 00:11:03.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.585 "is_configured": false, 00:11:03.585 "data_offset": 0, 00:11:03.585 "data_size": 63488 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "name": "pt2", 00:11:03.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.585 "is_configured": true, 00:11:03.585 "data_offset": 2048, 00:11:03.585 "data_size": 63488 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "name": "pt3", 00:11:03.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.585 "is_configured": true, 00:11:03.585 "data_offset": 2048, 00:11:03.585 "data_size": 63488 00:11:03.585 } 
00:11:03.585 ] 00:11:03.585 }' 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.585 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:04.155 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.155 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.155 [2024-11-08 16:52:33.413860] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.155 [2024-11-08 16:52:33.413901] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.155 [2024-11-08 16:52:33.413985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.155 [2024-11-08 16:52:33.414063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.155 [2024-11-08 16:52:33.414080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.156 16:52:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.156 [2024-11-08 16:52:33.489710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.156 [2024-11-08 16:52:33.489778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.156 [2024-11-08 16:52:33.489797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:04.156 [2024-11-08 16:52:33.489806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.156 [2024-11-08 16:52:33.491992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.156 [2024-11-08 16:52:33.492032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.156 [2024-11-08 16:52:33.492105] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.156 [2024-11-08 16:52:33.492136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.156 pt2 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.156 16:52:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.156 "name": "raid_bdev1", 00:11:04.156 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:04.156 "strip_size_kb": 0, 00:11:04.156 "state": "configuring", 00:11:04.156 "raid_level": "raid1", 00:11:04.156 "superblock": true, 00:11:04.156 "num_base_bdevs": 3, 00:11:04.156 "num_base_bdevs_discovered": 1, 00:11:04.156 "num_base_bdevs_operational": 2, 00:11:04.156 "base_bdevs_list": [ 00:11:04.156 { 00:11:04.156 "name": null, 00:11:04.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.156 "is_configured": false, 00:11:04.156 "data_offset": 2048, 00:11:04.156 "data_size": 63488 00:11:04.156 }, 00:11:04.156 { 00:11:04.156 "name": "pt2", 00:11:04.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.156 "is_configured": true, 00:11:04.156 "data_offset": 2048, 00:11:04.156 "data_size": 63488 00:11:04.156 }, 00:11:04.156 { 00:11:04.156 "name": null, 00:11:04.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.156 "is_configured": false, 00:11:04.156 "data_offset": 2048, 00:11:04.156 "data_size": 63488 00:11:04.156 } 
00:11:04.156 ] 00:11:04.156 }' 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.156 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:04.415 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:04.415 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.416 [2024-11-08 16:52:33.885076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:04.416 [2024-11-08 16:52:33.885147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.416 [2024-11-08 16:52:33.885172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:04.416 [2024-11-08 16:52:33.885183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.416 [2024-11-08 16:52:33.885614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.416 [2024-11-08 16:52:33.885653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:04.416 [2024-11-08 16:52:33.885740] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:04.416 [2024-11-08 16:52:33.885769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:04.416 [2024-11-08 16:52:33.885870] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:11:04.416 [2024-11-08 16:52:33.885883] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.416 [2024-11-08 16:52:33.886159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:04.416 [2024-11-08 16:52:33.886297] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:04.416 [2024-11-08 16:52:33.886312] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:11:04.416 [2024-11-08 16:52:33.886425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.416 pt3 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.416 
16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.416 "name": "raid_bdev1", 00:11:04.416 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:04.416 "strip_size_kb": 0, 00:11:04.416 "state": "online", 00:11:04.416 "raid_level": "raid1", 00:11:04.416 "superblock": true, 00:11:04.416 "num_base_bdevs": 3, 00:11:04.416 "num_base_bdevs_discovered": 2, 00:11:04.416 "num_base_bdevs_operational": 2, 00:11:04.416 "base_bdevs_list": [ 00:11:04.416 { 00:11:04.416 "name": null, 00:11:04.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.416 "is_configured": false, 00:11:04.416 "data_offset": 2048, 00:11:04.416 "data_size": 63488 00:11:04.416 }, 00:11:04.416 { 00:11:04.416 "name": "pt2", 00:11:04.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.416 "is_configured": true, 00:11:04.416 "data_offset": 2048, 00:11:04.416 "data_size": 63488 00:11:04.416 }, 00:11:04.416 { 00:11:04.416 "name": "pt3", 00:11:04.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.416 "is_configured": true, 00:11:04.416 "data_offset": 2048, 00:11:04.416 "data_size": 63488 00:11:04.416 } 00:11:04.416 ] 00:11:04.416 }' 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.416 16:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.987 [2024-11-08 16:52:34.296370] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.987 [2024-11-08 16:52:34.296404] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.987 [2024-11-08 16:52:34.296489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.987 [2024-11-08 16:52:34.296547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.987 [2024-11-08 16:52:34.296565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.987 [2024-11-08 16:52:34.368217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.987 [2024-11-08 16:52:34.368286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.987 [2024-11-08 16:52:34.368304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:04.987 [2024-11-08 16:52:34.368315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.987 [2024-11-08 16:52:34.370535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.987 [2024-11-08 16:52:34.370576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.987 [2024-11-08 16:52:34.370660] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:04.987 [2024-11-08 16:52:34.370703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:04.987 [2024-11-08 16:52:34.370807] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:04.987 [2024-11-08 16:52:34.370821] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.987 [2024-11-08 16:52:34.370836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:11:04.987 [2024-11-08 16:52:34.370867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.987 pt1 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.987 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.988 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.988 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.988 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.988 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.988 16:52:34 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.988 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.988 "name": "raid_bdev1", 00:11:04.988 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:04.988 "strip_size_kb": 0, 00:11:04.988 "state": "configuring", 00:11:04.988 "raid_level": "raid1", 00:11:04.988 "superblock": true, 00:11:04.988 "num_base_bdevs": 3, 00:11:04.988 "num_base_bdevs_discovered": 1, 00:11:04.988 "num_base_bdevs_operational": 2, 00:11:04.988 "base_bdevs_list": [ 00:11:04.988 { 00:11:04.988 "name": null, 00:11:04.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.988 "is_configured": false, 00:11:04.988 "data_offset": 2048, 00:11:04.988 "data_size": 63488 00:11:04.988 }, 00:11:04.988 { 00:11:04.988 "name": "pt2", 00:11:04.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.988 "is_configured": true, 00:11:04.988 "data_offset": 2048, 00:11:04.988 "data_size": 63488 00:11:04.988 }, 00:11:04.988 { 00:11:04.988 "name": null, 00:11:04.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.988 "is_configured": false, 00:11:04.988 "data_offset": 2048, 00:11:04.988 "data_size": 63488 00:11:04.988 } 00:11:04.988 ] 00:11:04.988 }' 00:11:04.988 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.988 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.557 [2024-11-08 16:52:34.843389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:05.557 [2024-11-08 16:52:34.843458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.557 [2024-11-08 16:52:34.843479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:05.557 [2024-11-08 16:52:34.843490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.557 [2024-11-08 16:52:34.843910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.557 [2024-11-08 16:52:34.843940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:05.557 [2024-11-08 16:52:34.844020] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:05.557 [2024-11-08 16:52:34.844070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:05.557 [2024-11-08 16:52:34.844170] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:05.557 [2024-11-08 16:52:34.844186] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:05.557 [2024-11-08 16:52:34.844407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:05.557 [2024-11-08 16:52:34.844545] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:05.557 [2024-11-08 16:52:34.844567] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:05.557 [2024-11-08 16:52:34.844691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.557 pt3 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:05.557 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.557 "name": "raid_bdev1", 00:11:05.557 "uuid": "b1d71a9c-b110-44c7-8cbd-999c920c71ab", 00:11:05.557 "strip_size_kb": 0, 00:11:05.557 "state": "online", 00:11:05.557 "raid_level": "raid1", 00:11:05.557 "superblock": true, 00:11:05.557 "num_base_bdevs": 3, 00:11:05.557 "num_base_bdevs_discovered": 2, 00:11:05.557 "num_base_bdevs_operational": 2, 00:11:05.557 "base_bdevs_list": [ 00:11:05.557 { 00:11:05.557 "name": null, 00:11:05.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.557 "is_configured": false, 00:11:05.557 "data_offset": 2048, 00:11:05.557 "data_size": 63488 00:11:05.557 }, 00:11:05.557 { 00:11:05.557 "name": "pt2", 00:11:05.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.557 "is_configured": true, 00:11:05.557 "data_offset": 2048, 00:11:05.557 "data_size": 63488 00:11:05.557 }, 00:11:05.557 { 00:11:05.557 "name": "pt3", 00:11:05.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.558 "is_configured": true, 00:11:05.558 "data_offset": 2048, 00:11:05.558 "data_size": 63488 00:11:05.558 } 00:11:05.558 ] 00:11:05.558 }' 00:11:05.558 16:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.558 16:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.829 16:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:05.829 16:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:05.829 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.829 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.829 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.829 16:52:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:05.829 16:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.829 16:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:05.829 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.829 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.829 [2024-11-08 16:52:35.318937] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.829 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b1d71a9c-b110-44c7-8cbd-999c920c71ab '!=' b1d71a9c-b110-44c7-8cbd-999c920c71ab ']' 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79662 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79662 ']' 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79662 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79662 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:06.109 killing process with pid 79662 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79662' 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79662 00:11:06.109 [2024-11-08 16:52:35.399901] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.109 [2024-11-08 16:52:35.400002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.109 [2024-11-08 16:52:35.400077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.109 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79662 00:11:06.109 [2024-11-08 16:52:35.400091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:06.109 [2024-11-08 16:52:35.434037] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.369 16:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:06.369 00:11:06.369 real 0m6.252s 00:11:06.369 user 0m10.459s 00:11:06.369 sys 0m1.325s 00:11:06.369 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:06.369 16:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.369 ************************************ 00:11:06.369 END TEST raid_superblock_test 00:11:06.369 ************************************ 00:11:06.369 16:52:35 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:06.369 16:52:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:06.369 16:52:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:06.369 16:52:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.369 ************************************ 00:11:06.369 START TEST raid_read_error_test 00:11:06.369 ************************************ 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:11:06.369 16:52:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:06.369 16:52:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4yEni2i29m 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80091 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80091 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80091 ']' 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:06.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:06.369 16:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.369 [2024-11-08 16:52:35.855235] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:06.369 [2024-11-08 16:52:35.855408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80091 ] 00:11:06.629 [2024-11-08 16:52:36.016731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.629 [2024-11-08 16:52:36.068491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.629 [2024-11-08 16:52:36.110784] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.629 [2024-11-08 16:52:36.110833] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.570 BaseBdev1_malloc 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.570 true 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.570 [2024-11-08 16:52:36.768989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:07.570 [2024-11-08 16:52:36.769047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.570 [2024-11-08 16:52:36.769068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:07.570 [2024-11-08 16:52:36.769077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.570 [2024-11-08 16:52:36.771331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.570 [2024-11-08 16:52:36.771379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:07.570 BaseBdev1 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.570 BaseBdev2_malloc 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.570 true 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.570 [2024-11-08 16:52:36.820303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:07.570 [2024-11-08 16:52:36.820377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.570 [2024-11-08 16:52:36.820398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:07.570 [2024-11-08 16:52:36.820407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.570 [2024-11-08 16:52:36.822423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.570 [2024-11-08 16:52:36.822458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:07.570 BaseBdev2 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.570 BaseBdev3_malloc 00:11:07.570 16:52:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.570 true 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.570 [2024-11-08 16:52:36.860945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:07.570 [2024-11-08 16:52:36.860996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.570 [2024-11-08 16:52:36.861014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:07.570 [2024-11-08 16:52:36.861022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.570 [2024-11-08 16:52:36.863132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.570 [2024-11-08 16:52:36.863174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:07.570 BaseBdev3 00:11:07.570 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.571 [2024-11-08 16:52:36.872984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.571 [2024-11-08 16:52:36.874840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.571 [2024-11-08 16:52:36.874925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.571 [2024-11-08 16:52:36.875111] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:07.571 [2024-11-08 16:52:36.875129] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:07.571 [2024-11-08 16:52:36.875396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:07.571 [2024-11-08 16:52:36.875551] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:07.571 [2024-11-08 16:52:36.875569] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:11:07.571 [2024-11-08 16:52:36.875737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.571 16:52:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.571 "name": "raid_bdev1", 00:11:07.571 "uuid": "1d484a8e-d0fe-4dec-b73c-6e44c956ad54", 00:11:07.571 "strip_size_kb": 0, 00:11:07.571 "state": "online", 00:11:07.571 "raid_level": "raid1", 00:11:07.571 "superblock": true, 00:11:07.571 "num_base_bdevs": 3, 00:11:07.571 "num_base_bdevs_discovered": 3, 00:11:07.571 "num_base_bdevs_operational": 3, 00:11:07.571 "base_bdevs_list": [ 00:11:07.571 { 00:11:07.571 "name": "BaseBdev1", 00:11:07.571 "uuid": "e487bf36-a6f0-53ca-a12f-9c247652d35d", 00:11:07.571 "is_configured": true, 00:11:07.571 "data_offset": 2048, 00:11:07.571 "data_size": 63488 00:11:07.571 }, 00:11:07.571 { 00:11:07.571 "name": "BaseBdev2", 00:11:07.571 "uuid": "5cd4bfc3-ef96-590c-bbf1-17231483b1cc", 00:11:07.571 "is_configured": true, 00:11:07.571 "data_offset": 2048, 00:11:07.571 "data_size": 63488 
00:11:07.571 }, 00:11:07.571 { 00:11:07.571 "name": "BaseBdev3", 00:11:07.571 "uuid": "a76018a1-426d-50e1-a81f-7de6a08d98e7", 00:11:07.571 "is_configured": true, 00:11:07.571 "data_offset": 2048, 00:11:07.571 "data_size": 63488 00:11:07.571 } 00:11:07.571 ] 00:11:07.571 }' 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.571 16:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.831 16:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:07.831 16:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:08.089 [2024-11-08 16:52:37.360546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.030 
16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.030 "name": "raid_bdev1", 00:11:09.030 "uuid": "1d484a8e-d0fe-4dec-b73c-6e44c956ad54", 00:11:09.030 "strip_size_kb": 0, 00:11:09.030 "state": "online", 00:11:09.030 "raid_level": "raid1", 00:11:09.030 "superblock": true, 00:11:09.030 "num_base_bdevs": 3, 00:11:09.030 "num_base_bdevs_discovered": 3, 00:11:09.030 "num_base_bdevs_operational": 3, 00:11:09.030 "base_bdevs_list": [ 00:11:09.030 { 00:11:09.030 "name": "BaseBdev1", 00:11:09.030 "uuid": "e487bf36-a6f0-53ca-a12f-9c247652d35d", 
00:11:09.030 "is_configured": true, 00:11:09.030 "data_offset": 2048, 00:11:09.030 "data_size": 63488 00:11:09.030 }, 00:11:09.030 { 00:11:09.030 "name": "BaseBdev2", 00:11:09.030 "uuid": "5cd4bfc3-ef96-590c-bbf1-17231483b1cc", 00:11:09.030 "is_configured": true, 00:11:09.030 "data_offset": 2048, 00:11:09.030 "data_size": 63488 00:11:09.030 }, 00:11:09.030 { 00:11:09.030 "name": "BaseBdev3", 00:11:09.030 "uuid": "a76018a1-426d-50e1-a81f-7de6a08d98e7", 00:11:09.030 "is_configured": true, 00:11:09.030 "data_offset": 2048, 00:11:09.030 "data_size": 63488 00:11:09.030 } 00:11:09.030 ] 00:11:09.030 }' 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.030 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.291 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:09.291 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.291 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.291 [2024-11-08 16:52:38.731685] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.291 [2024-11-08 16:52:38.731732] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.291 [2024-11-08 16:52:38.734744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.291 [2024-11-08 16:52:38.734821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.291 [2024-11-08 16:52:38.734959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.291 [2024-11-08 16:52:38.734982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:11:09.291 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:09.291 16:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80091 00:11:09.291 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80091 ']' 00:11:09.291 { 00:11:09.291 "results": [ 00:11:09.292 { 00:11:09.292 "job": "raid_bdev1", 00:11:09.292 "core_mask": "0x1", 00:11:09.292 "workload": "randrw", 00:11:09.292 "percentage": 50, 00:11:09.292 "status": "finished", 00:11:09.292 "queue_depth": 1, 00:11:09.292 "io_size": 131072, 00:11:09.292 "runtime": 1.371885, 00:11:09.292 "iops": 13447.92019739264, 00:11:09.292 "mibps": 1680.99002467408, 00:11:09.292 "io_failed": 0, 00:11:09.292 "io_timeout": 0, 00:11:09.292 "avg_latency_us": 71.62423629308792, 00:11:09.292 "min_latency_us": 23.252401746724892, 00:11:09.292 "max_latency_us": 1702.7912663755458 00:11:09.292 } 00:11:09.292 ], 00:11:09.292 "core_count": 1 00:11:09.292 } 00:11:09.292 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80091 00:11:09.292 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:09.292 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.292 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80091 00:11:09.292 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.292 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.292 killing process with pid 80091 00:11:09.292 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80091' 00:11:09.292 16:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80091 00:11:09.292 [2024-11-08 16:52:38.776257] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.292 16:52:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80091 00:11:09.292 [2024-11-08 16:52:38.804130] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.551 16:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4yEni2i29m 00:11:09.551 16:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:09.551 16:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:09.551 16:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:09.551 16:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:09.551 16:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.551 16:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:09.551 16:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:09.551 00:11:09.551 real 0m3.306s 00:11:09.551 user 0m4.158s 00:11:09.551 sys 0m0.544s 00:11:09.551 16:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.551 16:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.551 ************************************ 00:11:09.551 END TEST raid_read_error_test 00:11:09.551 ************************************ 00:11:09.810 16:52:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:09.810 16:52:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:09.810 16:52:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.810 16:52:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.810 ************************************ 00:11:09.810 START TEST raid_write_error_test 00:11:09.810 ************************************ 00:11:09.810 16:52:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8IDpYdHbf8 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80220 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80220 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80220 ']' 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.810 16:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.810 [2024-11-08 16:52:39.221750] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:09.810 [2024-11-08 16:52:39.221896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80220 ] 00:11:10.069 [2024-11-08 16:52:39.386208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.069 [2024-11-08 16:52:39.436973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.069 [2024-11-08 16:52:39.480773] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.069 [2024-11-08 16:52:39.480815] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.646 BaseBdev1_malloc 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.646 true 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.646 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.647 [2024-11-08 16:52:40.155683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:10.647 [2024-11-08 16:52:40.155741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.647 [2024-11-08 16:52:40.155777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:10.647 [2024-11-08 16:52:40.155789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.647 [2024-11-08 16:52:40.157980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.647 [2024-11-08 16:52:40.158016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:10.647 BaseBdev1 00:11:10.647 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.647 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.647 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:10.647 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.647 16:52:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:10.905 BaseBdev2_malloc 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.905 true 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.905 [2024-11-08 16:52:40.192163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:10.905 [2024-11-08 16:52:40.192216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.905 [2024-11-08 16:52:40.192235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:10.905 [2024-11-08 16:52:40.192243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.905 [2024-11-08 16:52:40.194272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.905 [2024-11-08 16:52:40.194309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:10.905 BaseBdev2 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.905 16:52:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.905 BaseBdev3_malloc 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.905 true 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.905 [2024-11-08 16:52:40.220614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:10.905 [2024-11-08 16:52:40.220671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.905 [2024-11-08 16:52:40.220690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:10.905 [2024-11-08 16:52:40.220698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.905 [2024-11-08 16:52:40.222652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.905 [2024-11-08 16:52:40.222685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:10.905 BaseBdev3 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.905 [2024-11-08 16:52:40.228668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.905 [2024-11-08 16:52:40.230491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.905 [2024-11-08 16:52:40.230573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.905 [2024-11-08 16:52:40.230757] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:10.905 [2024-11-08 16:52:40.230772] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:10.905 [2024-11-08 16:52:40.231044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:10.905 [2024-11-08 16:52:40.231216] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:10.905 [2024-11-08 16:52:40.231241] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:11:10.905 [2024-11-08 16:52:40.231365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.905 "name": "raid_bdev1", 00:11:10.905 "uuid": "c364de5b-6504-4ebc-9079-3ee39fbe8ac6", 00:11:10.905 "strip_size_kb": 0, 00:11:10.905 "state": "online", 00:11:10.905 "raid_level": "raid1", 00:11:10.905 "superblock": true, 00:11:10.905 "num_base_bdevs": 3, 00:11:10.905 "num_base_bdevs_discovered": 3, 00:11:10.905 "num_base_bdevs_operational": 3, 00:11:10.905 "base_bdevs_list": [ 00:11:10.905 { 00:11:10.905 "name": "BaseBdev1", 00:11:10.905 
"uuid": "7488e42f-31b1-5531-be24-1cd7159bba2e", 00:11:10.905 "is_configured": true, 00:11:10.905 "data_offset": 2048, 00:11:10.905 "data_size": 63488 00:11:10.905 }, 00:11:10.905 { 00:11:10.905 "name": "BaseBdev2", 00:11:10.905 "uuid": "593fffba-f308-5c55-bccb-74a4df7601f8", 00:11:10.905 "is_configured": true, 00:11:10.905 "data_offset": 2048, 00:11:10.905 "data_size": 63488 00:11:10.905 }, 00:11:10.905 { 00:11:10.905 "name": "BaseBdev3", 00:11:10.905 "uuid": "687fe37c-9666-5c0f-8375-fe7dd036be0a", 00:11:10.905 "is_configured": true, 00:11:10.905 "data_offset": 2048, 00:11:10.905 "data_size": 63488 00:11:10.905 } 00:11:10.905 ] 00:11:10.905 }' 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.905 16:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.164 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:11.164 16:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:11.423 [2024-11-08 16:52:40.760146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.359 [2024-11-08 16:52:41.679283] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:12.359 [2024-11-08 16:52:41.679347] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.359 [2024-11-08 16:52:41.679591] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 
00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.359 "name": "raid_bdev1", 00:11:12.359 "uuid": "c364de5b-6504-4ebc-9079-3ee39fbe8ac6", 00:11:12.359 "strip_size_kb": 0, 00:11:12.359 "state": "online", 00:11:12.359 "raid_level": "raid1", 00:11:12.359 "superblock": true, 00:11:12.359 "num_base_bdevs": 3, 00:11:12.359 "num_base_bdevs_discovered": 2, 00:11:12.359 "num_base_bdevs_operational": 2, 00:11:12.359 "base_bdevs_list": [ 00:11:12.359 { 00:11:12.359 "name": null, 00:11:12.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.359 "is_configured": false, 00:11:12.359 "data_offset": 0, 00:11:12.359 "data_size": 63488 00:11:12.359 }, 00:11:12.359 { 00:11:12.359 "name": "BaseBdev2", 00:11:12.359 "uuid": "593fffba-f308-5c55-bccb-74a4df7601f8", 00:11:12.359 "is_configured": true, 00:11:12.359 "data_offset": 2048, 00:11:12.359 "data_size": 63488 00:11:12.359 }, 00:11:12.359 { 00:11:12.359 "name": "BaseBdev3", 00:11:12.359 "uuid": "687fe37c-9666-5c0f-8375-fe7dd036be0a", 00:11:12.359 "is_configured": true, 00:11:12.359 "data_offset": 2048, 00:11:12.359 "data_size": 63488 00:11:12.359 } 00:11:12.359 ] 00:11:12.359 }' 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.359 16:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.926 16:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.926 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.926 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.926 [2024-11-08 16:52:42.161493] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.926 [2024-11-08 16:52:42.161533] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.926 [2024-11-08 16:52:42.164477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.927 [2024-11-08 16:52:42.164528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.927 [2024-11-08 16:52:42.164621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.927 [2024-11-08 16:52:42.164677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:11:12.927 { 00:11:12.927 "results": [ 00:11:12.927 { 00:11:12.927 "job": "raid_bdev1", 00:11:12.927 "core_mask": "0x1", 00:11:12.927 "workload": "randrw", 00:11:12.927 "percentage": 50, 00:11:12.927 "status": "finished", 00:11:12.927 "queue_depth": 1, 00:11:12.927 "io_size": 131072, 00:11:12.927 "runtime": 1.402122, 00:11:12.927 "iops": 15362.429232263668, 00:11:12.927 "mibps": 1920.3036540329585, 00:11:12.927 "io_failed": 0, 00:11:12.927 "io_timeout": 0, 00:11:12.927 "avg_latency_us": 62.4201692393151, 00:11:12.927 "min_latency_us": 23.252401746724892, 00:11:12.927 "max_latency_us": 1616.9362445414847 00:11:12.927 } 00:11:12.927 ], 00:11:12.927 "core_count": 1 00:11:12.927 } 00:11:12.927 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.927 16:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80220 00:11:12.927 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80220 ']' 00:11:12.927 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80220 00:11:12.927 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:12.927 16:52:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:12.927 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80220 00:11:12.927 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:12.927 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:12.927 killing process with pid 80220 00:11:12.927 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80220' 00:11:12.927 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80220 00:11:12.927 [2024-11-08 16:52:42.207606] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.927 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80220 00:11:12.927 [2024-11-08 16:52:42.234357] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.186 16:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8IDpYdHbf8 00:11:13.186 16:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:13.186 16:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:13.186 16:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:13.186 16:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:13.186 16:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:13.186 16:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:13.186 16:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:13.186 00:11:13.186 real 0m3.349s 00:11:13.186 user 0m4.320s 00:11:13.186 sys 0m0.528s 00:11:13.186 16:52:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.186 16:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.186 ************************************ 00:11:13.186 END TEST raid_write_error_test 00:11:13.186 ************************************ 00:11:13.186 16:52:42 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:13.186 16:52:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:13.186 16:52:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:13.186 16:52:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:13.186 16:52:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.186 16:52:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.186 ************************************ 00:11:13.186 START TEST raid_state_function_test 00:11:13.186 ************************************ 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:13.186 
16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80353 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:13.186 Process raid pid: 80353 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80353' 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80353 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80353 ']' 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.186 16:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:13.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.187 16:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.187 16:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:13.187 16:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.187 [2024-11-08 16:52:42.640978] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:13.187 [2024-11-08 16:52:42.641496] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.445 [2024-11-08 16:52:42.804513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.445 [2024-11-08 16:52:42.857438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.445 [2024-11-08 16:52:42.902103] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.445 [2024-11-08 16:52:42.902155] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.014 [2024-11-08 16:52:43.505022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.014 [2024-11-08 16:52:43.505263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.014 [2024-11-08 16:52:43.505300] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.014 [2024-11-08 16:52:43.505387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.014 [2024-11-08 16:52:43.505401] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:14.014 [2024-11-08 16:52:43.505462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.014 [2024-11-08 16:52:43.505474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.014 [2024-11-08 16:52:43.505528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.014 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.273 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.273 "name": "Existed_Raid", 00:11:14.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.273 "strip_size_kb": 64, 00:11:14.273 "state": "configuring", 00:11:14.273 "raid_level": "raid0", 00:11:14.273 "superblock": false, 00:11:14.273 "num_base_bdevs": 4, 00:11:14.273 "num_base_bdevs_discovered": 0, 00:11:14.273 "num_base_bdevs_operational": 4, 00:11:14.273 "base_bdevs_list": [ 00:11:14.273 { 00:11:14.273 "name": "BaseBdev1", 00:11:14.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.273 "is_configured": false, 00:11:14.273 "data_offset": 0, 00:11:14.273 "data_size": 0 00:11:14.273 }, 00:11:14.273 { 00:11:14.273 "name": "BaseBdev2", 00:11:14.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.273 "is_configured": false, 00:11:14.273 "data_offset": 0, 00:11:14.273 "data_size": 0 00:11:14.273 }, 00:11:14.273 { 00:11:14.273 "name": "BaseBdev3", 00:11:14.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.273 "is_configured": false, 00:11:14.273 "data_offset": 0, 00:11:14.273 "data_size": 0 00:11:14.273 }, 00:11:14.273 { 00:11:14.273 "name": "BaseBdev4", 00:11:14.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.273 "is_configured": false, 00:11:14.273 "data_offset": 0, 00:11:14.273 "data_size": 0 00:11:14.273 } 00:11:14.273 ] 00:11:14.273 }' 00:11:14.273 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.273 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.535 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:14.535 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.535 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.535 [2024-11-08 16:52:43.984104] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.535 [2024-11-08 16:52:43.984165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:14.535 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.535 16:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.535 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.535 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.535 [2024-11-08 16:52:43.996138] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.535 [2024-11-08 16:52:43.996563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.535 [2024-11-08 16:52:43.996588] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.535 [2024-11-08 16:52:43.996671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.535 [2024-11-08 16:52:43.996681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.535 [2024-11-08 16:52:43.996752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.535 [2024-11-08 16:52:43.996765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.535 [2024-11-08 16:52:43.996813] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.535 16:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.535 [2024-11-08 16:52:44.017445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.535 BaseBdev1 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.535 [ 00:11:14.535 { 00:11:14.535 "name": "BaseBdev1", 00:11:14.535 "aliases": [ 00:11:14.535 "2b5efb2d-d5ce-45aa-a9ab-daa9eba99d71" 00:11:14.535 ], 00:11:14.535 "product_name": "Malloc disk", 00:11:14.535 "block_size": 512, 00:11:14.535 "num_blocks": 65536, 00:11:14.535 "uuid": "2b5efb2d-d5ce-45aa-a9ab-daa9eba99d71", 00:11:14.535 "assigned_rate_limits": { 00:11:14.535 "rw_ios_per_sec": 0, 00:11:14.535 "rw_mbytes_per_sec": 0, 00:11:14.535 "r_mbytes_per_sec": 0, 00:11:14.535 "w_mbytes_per_sec": 0 00:11:14.535 }, 00:11:14.535 "claimed": true, 00:11:14.535 "claim_type": "exclusive_write", 00:11:14.535 "zoned": false, 00:11:14.535 "supported_io_types": { 00:11:14.535 "read": true, 00:11:14.535 "write": true, 00:11:14.535 "unmap": true, 00:11:14.535 "flush": true, 00:11:14.535 "reset": true, 00:11:14.535 "nvme_admin": false, 00:11:14.535 "nvme_io": false, 00:11:14.535 "nvme_io_md": false, 00:11:14.535 "write_zeroes": true, 00:11:14.535 "zcopy": true, 00:11:14.535 "get_zone_info": false, 00:11:14.535 "zone_management": false, 00:11:14.535 "zone_append": false, 00:11:14.535 "compare": false, 00:11:14.535 "compare_and_write": false, 00:11:14.535 "abort": true, 00:11:14.535 "seek_hole": false, 00:11:14.535 "seek_data": false, 00:11:14.535 "copy": true, 00:11:14.535 "nvme_iov_md": false 00:11:14.535 }, 00:11:14.535 "memory_domains": [ 00:11:14.535 { 00:11:14.535 "dma_device_id": "system", 00:11:14.535 "dma_device_type": 1 00:11:14.535 }, 00:11:14.535 { 00:11:14.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.535 "dma_device_type": 2 00:11:14.535 } 00:11:14.535 ], 00:11:14.535 "driver_specific": {} 00:11:14.535 } 00:11:14.535 ] 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.535 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.821 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.821 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.821 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.821 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.821 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.821 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.821 "name": "Existed_Raid", 
00:11:14.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.821 "strip_size_kb": 64, 00:11:14.821 "state": "configuring", 00:11:14.821 "raid_level": "raid0", 00:11:14.821 "superblock": false, 00:11:14.821 "num_base_bdevs": 4, 00:11:14.821 "num_base_bdevs_discovered": 1, 00:11:14.821 "num_base_bdevs_operational": 4, 00:11:14.821 "base_bdevs_list": [ 00:11:14.821 { 00:11:14.821 "name": "BaseBdev1", 00:11:14.821 "uuid": "2b5efb2d-d5ce-45aa-a9ab-daa9eba99d71", 00:11:14.821 "is_configured": true, 00:11:14.821 "data_offset": 0, 00:11:14.821 "data_size": 65536 00:11:14.821 }, 00:11:14.821 { 00:11:14.821 "name": "BaseBdev2", 00:11:14.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.821 "is_configured": false, 00:11:14.821 "data_offset": 0, 00:11:14.821 "data_size": 0 00:11:14.821 }, 00:11:14.821 { 00:11:14.821 "name": "BaseBdev3", 00:11:14.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.821 "is_configured": false, 00:11:14.821 "data_offset": 0, 00:11:14.821 "data_size": 0 00:11:14.821 }, 00:11:14.821 { 00:11:14.821 "name": "BaseBdev4", 00:11:14.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.821 "is_configured": false, 00:11:14.821 "data_offset": 0, 00:11:14.821 "data_size": 0 00:11:14.821 } 00:11:14.821 ] 00:11:14.821 }' 00:11:14.821 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.821 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.079 [2024-11-08 16:52:44.508720] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.079 [2024-11-08 16:52:44.508785] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.079 [2024-11-08 16:52:44.516742] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.079 [2024-11-08 16:52:44.518847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.079 [2024-11-08 16:52:44.519057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.079 [2024-11-08 16:52:44.519088] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.079 [2024-11-08 16:52:44.519175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.079 [2024-11-08 16:52:44.519190] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:15.079 [2024-11-08 16:52:44.519243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.079 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.080 "name": "Existed_Raid", 00:11:15.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.080 "strip_size_kb": 64, 00:11:15.080 "state": "configuring", 00:11:15.080 "raid_level": "raid0", 00:11:15.080 "superblock": false, 00:11:15.080 "num_base_bdevs": 4, 00:11:15.080 
"num_base_bdevs_discovered": 1, 00:11:15.080 "num_base_bdevs_operational": 4, 00:11:15.080 "base_bdevs_list": [ 00:11:15.080 { 00:11:15.080 "name": "BaseBdev1", 00:11:15.080 "uuid": "2b5efb2d-d5ce-45aa-a9ab-daa9eba99d71", 00:11:15.080 "is_configured": true, 00:11:15.080 "data_offset": 0, 00:11:15.080 "data_size": 65536 00:11:15.080 }, 00:11:15.080 { 00:11:15.080 "name": "BaseBdev2", 00:11:15.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.080 "is_configured": false, 00:11:15.080 "data_offset": 0, 00:11:15.080 "data_size": 0 00:11:15.080 }, 00:11:15.080 { 00:11:15.080 "name": "BaseBdev3", 00:11:15.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.080 "is_configured": false, 00:11:15.080 "data_offset": 0, 00:11:15.080 "data_size": 0 00:11:15.080 }, 00:11:15.080 { 00:11:15.080 "name": "BaseBdev4", 00:11:15.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.080 "is_configured": false, 00:11:15.080 "data_offset": 0, 00:11:15.080 "data_size": 0 00:11:15.080 } 00:11:15.080 ] 00:11:15.080 }' 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.080 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.648 16:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.648 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.648 16:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.648 [2024-11-08 16:52:45.005606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.648 BaseBdev2 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:15.648 16:52:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.648 [ 00:11:15.648 { 00:11:15.648 "name": "BaseBdev2", 00:11:15.648 "aliases": [ 00:11:15.648 "92d8948c-8d42-4e62-8fc2-8448a03a1d4b" 00:11:15.648 ], 00:11:15.648 "product_name": "Malloc disk", 00:11:15.648 "block_size": 512, 00:11:15.648 "num_blocks": 65536, 00:11:15.648 "uuid": "92d8948c-8d42-4e62-8fc2-8448a03a1d4b", 00:11:15.648 "assigned_rate_limits": { 00:11:15.648 "rw_ios_per_sec": 0, 00:11:15.648 "rw_mbytes_per_sec": 0, 00:11:15.648 "r_mbytes_per_sec": 0, 00:11:15.648 "w_mbytes_per_sec": 0 00:11:15.648 }, 00:11:15.648 "claimed": true, 00:11:15.648 "claim_type": "exclusive_write", 00:11:15.648 "zoned": false, 00:11:15.648 "supported_io_types": { 
00:11:15.648 "read": true, 00:11:15.648 "write": true, 00:11:15.648 "unmap": true, 00:11:15.648 "flush": true, 00:11:15.648 "reset": true, 00:11:15.648 "nvme_admin": false, 00:11:15.648 "nvme_io": false, 00:11:15.648 "nvme_io_md": false, 00:11:15.648 "write_zeroes": true, 00:11:15.648 "zcopy": true, 00:11:15.648 "get_zone_info": false, 00:11:15.648 "zone_management": false, 00:11:15.648 "zone_append": false, 00:11:15.648 "compare": false, 00:11:15.648 "compare_and_write": false, 00:11:15.648 "abort": true, 00:11:15.648 "seek_hole": false, 00:11:15.648 "seek_data": false, 00:11:15.648 "copy": true, 00:11:15.648 "nvme_iov_md": false 00:11:15.648 }, 00:11:15.648 "memory_domains": [ 00:11:15.648 { 00:11:15.648 "dma_device_id": "system", 00:11:15.648 "dma_device_type": 1 00:11:15.648 }, 00:11:15.648 { 00:11:15.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.648 "dma_device_type": 2 00:11:15.648 } 00:11:15.648 ], 00:11:15.648 "driver_specific": {} 00:11:15.648 } 00:11:15.648 ] 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.648 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.649 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.649 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.649 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.649 "name": "Existed_Raid", 00:11:15.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.649 "strip_size_kb": 64, 00:11:15.649 "state": "configuring", 00:11:15.649 "raid_level": "raid0", 00:11:15.649 "superblock": false, 00:11:15.649 "num_base_bdevs": 4, 00:11:15.649 "num_base_bdevs_discovered": 2, 00:11:15.649 "num_base_bdevs_operational": 4, 00:11:15.649 "base_bdevs_list": [ 00:11:15.649 { 00:11:15.649 "name": "BaseBdev1", 00:11:15.649 "uuid": "2b5efb2d-d5ce-45aa-a9ab-daa9eba99d71", 00:11:15.649 "is_configured": true, 00:11:15.649 "data_offset": 0, 00:11:15.649 "data_size": 65536 00:11:15.649 }, 00:11:15.649 { 00:11:15.649 "name": "BaseBdev2", 00:11:15.649 "uuid": "92d8948c-8d42-4e62-8fc2-8448a03a1d4b", 00:11:15.649 
"is_configured": true, 00:11:15.649 "data_offset": 0, 00:11:15.649 "data_size": 65536 00:11:15.649 }, 00:11:15.649 { 00:11:15.649 "name": "BaseBdev3", 00:11:15.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.649 "is_configured": false, 00:11:15.649 "data_offset": 0, 00:11:15.649 "data_size": 0 00:11:15.649 }, 00:11:15.649 { 00:11:15.649 "name": "BaseBdev4", 00:11:15.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.649 "is_configured": false, 00:11:15.649 "data_offset": 0, 00:11:15.649 "data_size": 0 00:11:15.649 } 00:11:15.649 ] 00:11:15.649 }' 00:11:15.649 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.649 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.216 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:16.216 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.216 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.216 [2024-11-08 16:52:45.493327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.216 BaseBdev3 00:11:16.216 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.217 [ 00:11:16.217 { 00:11:16.217 "name": "BaseBdev3", 00:11:16.217 "aliases": [ 00:11:16.217 "b89bca96-706b-431c-8f39-a5c41cd43530" 00:11:16.217 ], 00:11:16.217 "product_name": "Malloc disk", 00:11:16.217 "block_size": 512, 00:11:16.217 "num_blocks": 65536, 00:11:16.217 "uuid": "b89bca96-706b-431c-8f39-a5c41cd43530", 00:11:16.217 "assigned_rate_limits": { 00:11:16.217 "rw_ios_per_sec": 0, 00:11:16.217 "rw_mbytes_per_sec": 0, 00:11:16.217 "r_mbytes_per_sec": 0, 00:11:16.217 "w_mbytes_per_sec": 0 00:11:16.217 }, 00:11:16.217 "claimed": true, 00:11:16.217 "claim_type": "exclusive_write", 00:11:16.217 "zoned": false, 00:11:16.217 "supported_io_types": { 00:11:16.217 "read": true, 00:11:16.217 "write": true, 00:11:16.217 "unmap": true, 00:11:16.217 "flush": true, 00:11:16.217 "reset": true, 00:11:16.217 "nvme_admin": false, 00:11:16.217 "nvme_io": false, 00:11:16.217 "nvme_io_md": false, 00:11:16.217 "write_zeroes": true, 00:11:16.217 "zcopy": true, 00:11:16.217 "get_zone_info": false, 00:11:16.217 "zone_management": false, 00:11:16.217 "zone_append": false, 00:11:16.217 "compare": false, 00:11:16.217 "compare_and_write": false, 
00:11:16.217 "abort": true, 00:11:16.217 "seek_hole": false, 00:11:16.217 "seek_data": false, 00:11:16.217 "copy": true, 00:11:16.217 "nvme_iov_md": false 00:11:16.217 }, 00:11:16.217 "memory_domains": [ 00:11:16.217 { 00:11:16.217 "dma_device_id": "system", 00:11:16.217 "dma_device_type": 1 00:11:16.217 }, 00:11:16.217 { 00:11:16.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.217 "dma_device_type": 2 00:11:16.217 } 00:11:16.217 ], 00:11:16.217 "driver_specific": {} 00:11:16.217 } 00:11:16.217 ] 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.217 "name": "Existed_Raid", 00:11:16.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.217 "strip_size_kb": 64, 00:11:16.217 "state": "configuring", 00:11:16.217 "raid_level": "raid0", 00:11:16.217 "superblock": false, 00:11:16.217 "num_base_bdevs": 4, 00:11:16.217 "num_base_bdevs_discovered": 3, 00:11:16.217 "num_base_bdevs_operational": 4, 00:11:16.217 "base_bdevs_list": [ 00:11:16.217 { 00:11:16.217 "name": "BaseBdev1", 00:11:16.217 "uuid": "2b5efb2d-d5ce-45aa-a9ab-daa9eba99d71", 00:11:16.217 "is_configured": true, 00:11:16.217 "data_offset": 0, 00:11:16.217 "data_size": 65536 00:11:16.217 }, 00:11:16.217 { 00:11:16.217 "name": "BaseBdev2", 00:11:16.217 "uuid": "92d8948c-8d42-4e62-8fc2-8448a03a1d4b", 00:11:16.217 "is_configured": true, 00:11:16.217 "data_offset": 0, 00:11:16.217 "data_size": 65536 00:11:16.217 }, 00:11:16.217 { 00:11:16.217 "name": "BaseBdev3", 00:11:16.217 "uuid": "b89bca96-706b-431c-8f39-a5c41cd43530", 00:11:16.217 "is_configured": true, 00:11:16.217 "data_offset": 0, 00:11:16.217 "data_size": 65536 00:11:16.217 }, 00:11:16.217 { 00:11:16.217 "name": "BaseBdev4", 00:11:16.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.217 "is_configured": false, 
00:11:16.217 "data_offset": 0, 00:11:16.217 "data_size": 0 00:11:16.217 } 00:11:16.217 ] 00:11:16.217 }' 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.217 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.476 [2024-11-08 16:52:45.932206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.476 [2024-11-08 16:52:45.932295] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:16.476 [2024-11-08 16:52:45.932315] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:16.476 [2024-11-08 16:52:45.932700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:16.476 [2024-11-08 16:52:45.932879] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:16.476 [2024-11-08 16:52:45.932902] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:11:16.476 [2024-11-08 16:52:45.933117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.476 BaseBdev4 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.476 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.477 [ 00:11:16.477 { 00:11:16.477 "name": "BaseBdev4", 00:11:16.477 "aliases": [ 00:11:16.477 "351d1f56-3a1f-486a-965b-6785153c4d2b" 00:11:16.477 ], 00:11:16.477 "product_name": "Malloc disk", 00:11:16.477 "block_size": 512, 00:11:16.477 "num_blocks": 65536, 00:11:16.477 "uuid": "351d1f56-3a1f-486a-965b-6785153c4d2b", 00:11:16.477 "assigned_rate_limits": { 00:11:16.477 "rw_ios_per_sec": 0, 00:11:16.477 "rw_mbytes_per_sec": 0, 00:11:16.477 "r_mbytes_per_sec": 0, 00:11:16.477 "w_mbytes_per_sec": 0 00:11:16.477 }, 00:11:16.477 "claimed": true, 00:11:16.477 "claim_type": "exclusive_write", 00:11:16.477 "zoned": false, 00:11:16.477 "supported_io_types": { 00:11:16.477 "read": true, 00:11:16.477 "write": true, 00:11:16.477 "unmap": true, 00:11:16.477 "flush": true, 00:11:16.477 "reset": true, 00:11:16.477 
"nvme_admin": false, 00:11:16.477 "nvme_io": false, 00:11:16.477 "nvme_io_md": false, 00:11:16.477 "write_zeroes": true, 00:11:16.477 "zcopy": true, 00:11:16.477 "get_zone_info": false, 00:11:16.477 "zone_management": false, 00:11:16.477 "zone_append": false, 00:11:16.477 "compare": false, 00:11:16.477 "compare_and_write": false, 00:11:16.477 "abort": true, 00:11:16.477 "seek_hole": false, 00:11:16.477 "seek_data": false, 00:11:16.477 "copy": true, 00:11:16.477 "nvme_iov_md": false 00:11:16.477 }, 00:11:16.477 "memory_domains": [ 00:11:16.477 { 00:11:16.477 "dma_device_id": "system", 00:11:16.477 "dma_device_type": 1 00:11:16.477 }, 00:11:16.477 { 00:11:16.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.477 "dma_device_type": 2 00:11:16.477 } 00:11:16.477 ], 00:11:16.477 "driver_specific": {} 00:11:16.477 } 00:11:16.477 ] 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.477 16:52:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.477 16:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.736 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.737 "name": "Existed_Raid", 00:11:16.737 "uuid": "406df535-1a4a-4c9d-a308-776f4a85faaf", 00:11:16.737 "strip_size_kb": 64, 00:11:16.737 "state": "online", 00:11:16.737 "raid_level": "raid0", 00:11:16.737 "superblock": false, 00:11:16.737 "num_base_bdevs": 4, 00:11:16.737 "num_base_bdevs_discovered": 4, 00:11:16.737 "num_base_bdevs_operational": 4, 00:11:16.737 "base_bdevs_list": [ 00:11:16.737 { 00:11:16.737 "name": "BaseBdev1", 00:11:16.737 "uuid": "2b5efb2d-d5ce-45aa-a9ab-daa9eba99d71", 00:11:16.737 "is_configured": true, 00:11:16.737 "data_offset": 0, 00:11:16.737 "data_size": 65536 00:11:16.737 }, 00:11:16.737 { 00:11:16.737 "name": "BaseBdev2", 00:11:16.737 "uuid": "92d8948c-8d42-4e62-8fc2-8448a03a1d4b", 00:11:16.737 "is_configured": true, 00:11:16.737 "data_offset": 0, 00:11:16.737 "data_size": 65536 00:11:16.737 }, 00:11:16.737 { 00:11:16.737 "name": "BaseBdev3", 00:11:16.737 "uuid": 
"b89bca96-706b-431c-8f39-a5c41cd43530", 00:11:16.737 "is_configured": true, 00:11:16.737 "data_offset": 0, 00:11:16.737 "data_size": 65536 00:11:16.737 }, 00:11:16.737 { 00:11:16.737 "name": "BaseBdev4", 00:11:16.737 "uuid": "351d1f56-3a1f-486a-965b-6785153c4d2b", 00:11:16.737 "is_configured": true, 00:11:16.737 "data_offset": 0, 00:11:16.737 "data_size": 65536 00:11:16.737 } 00:11:16.737 ] 00:11:16.737 }' 00:11:16.737 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.737 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.996 [2024-11-08 16:52:46.451904] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.996 16:52:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.996 "name": "Existed_Raid", 00:11:16.996 "aliases": [ 00:11:16.996 "406df535-1a4a-4c9d-a308-776f4a85faaf" 00:11:16.996 ], 00:11:16.996 "product_name": "Raid Volume", 00:11:16.996 "block_size": 512, 00:11:16.996 "num_blocks": 262144, 00:11:16.996 "uuid": "406df535-1a4a-4c9d-a308-776f4a85faaf", 00:11:16.996 "assigned_rate_limits": { 00:11:16.996 "rw_ios_per_sec": 0, 00:11:16.996 "rw_mbytes_per_sec": 0, 00:11:16.996 "r_mbytes_per_sec": 0, 00:11:16.996 "w_mbytes_per_sec": 0 00:11:16.996 }, 00:11:16.996 "claimed": false, 00:11:16.996 "zoned": false, 00:11:16.996 "supported_io_types": { 00:11:16.996 "read": true, 00:11:16.996 "write": true, 00:11:16.996 "unmap": true, 00:11:16.996 "flush": true, 00:11:16.996 "reset": true, 00:11:16.996 "nvme_admin": false, 00:11:16.996 "nvme_io": false, 00:11:16.996 "nvme_io_md": false, 00:11:16.996 "write_zeroes": true, 00:11:16.996 "zcopy": false, 00:11:16.996 "get_zone_info": false, 00:11:16.996 "zone_management": false, 00:11:16.996 "zone_append": false, 00:11:16.996 "compare": false, 00:11:16.996 "compare_and_write": false, 00:11:16.996 "abort": false, 00:11:16.996 "seek_hole": false, 00:11:16.996 "seek_data": false, 00:11:16.996 "copy": false, 00:11:16.996 "nvme_iov_md": false 00:11:16.996 }, 00:11:16.996 "memory_domains": [ 00:11:16.996 { 00:11:16.996 "dma_device_id": "system", 00:11:16.996 "dma_device_type": 1 00:11:16.996 }, 00:11:16.996 { 00:11:16.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.996 "dma_device_type": 2 00:11:16.996 }, 00:11:16.996 { 00:11:16.996 "dma_device_id": "system", 00:11:16.996 "dma_device_type": 1 00:11:16.996 }, 00:11:16.996 { 00:11:16.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.996 "dma_device_type": 2 00:11:16.996 }, 00:11:16.996 { 00:11:16.996 "dma_device_id": "system", 00:11:16.996 "dma_device_type": 1 00:11:16.996 }, 00:11:16.996 { 00:11:16.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:16.996 "dma_device_type": 2 00:11:16.996 }, 00:11:16.996 { 00:11:16.996 "dma_device_id": "system", 00:11:16.996 "dma_device_type": 1 00:11:16.996 }, 00:11:16.996 { 00:11:16.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.996 "dma_device_type": 2 00:11:16.996 } 00:11:16.996 ], 00:11:16.996 "driver_specific": { 00:11:16.996 "raid": { 00:11:16.996 "uuid": "406df535-1a4a-4c9d-a308-776f4a85faaf", 00:11:16.996 "strip_size_kb": 64, 00:11:16.996 "state": "online", 00:11:16.996 "raid_level": "raid0", 00:11:16.996 "superblock": false, 00:11:16.996 "num_base_bdevs": 4, 00:11:16.996 "num_base_bdevs_discovered": 4, 00:11:16.996 "num_base_bdevs_operational": 4, 00:11:16.996 "base_bdevs_list": [ 00:11:16.996 { 00:11:16.996 "name": "BaseBdev1", 00:11:16.996 "uuid": "2b5efb2d-d5ce-45aa-a9ab-daa9eba99d71", 00:11:16.996 "is_configured": true, 00:11:16.996 "data_offset": 0, 00:11:16.996 "data_size": 65536 00:11:16.996 }, 00:11:16.996 { 00:11:16.996 "name": "BaseBdev2", 00:11:16.996 "uuid": "92d8948c-8d42-4e62-8fc2-8448a03a1d4b", 00:11:16.996 "is_configured": true, 00:11:16.996 "data_offset": 0, 00:11:16.996 "data_size": 65536 00:11:16.996 }, 00:11:16.996 { 00:11:16.996 "name": "BaseBdev3", 00:11:16.996 "uuid": "b89bca96-706b-431c-8f39-a5c41cd43530", 00:11:16.996 "is_configured": true, 00:11:16.996 "data_offset": 0, 00:11:16.996 "data_size": 65536 00:11:16.996 }, 00:11:16.996 { 00:11:16.996 "name": "BaseBdev4", 00:11:16.996 "uuid": "351d1f56-3a1f-486a-965b-6785153c4d2b", 00:11:16.996 "is_configured": true, 00:11:16.996 "data_offset": 0, 00:11:16.996 "data_size": 65536 00:11:16.996 } 00:11:16.996 ] 00:11:16.996 } 00:11:16.996 } 00:11:16.996 }' 00:11:16.996 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:17.256 BaseBdev2 00:11:17.256 BaseBdev3 
00:11:17.256 BaseBdev4' 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.256 16:52:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.256 16:52:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.256 [2024-11-08 16:52:46.767171] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.256 [2024-11-08 16:52:46.767213] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.256 [2024-11-08 16:52:46.767271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:17.256 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.516 "name": "Existed_Raid", 00:11:17.516 "uuid": "406df535-1a4a-4c9d-a308-776f4a85faaf", 00:11:17.516 "strip_size_kb": 64, 00:11:17.516 "state": "offline", 00:11:17.516 "raid_level": "raid0", 00:11:17.516 "superblock": false, 00:11:17.516 "num_base_bdevs": 4, 00:11:17.516 "num_base_bdevs_discovered": 3, 00:11:17.516 "num_base_bdevs_operational": 3, 00:11:17.516 "base_bdevs_list": [ 00:11:17.516 { 00:11:17.516 "name": null, 00:11:17.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.516 "is_configured": false, 00:11:17.516 "data_offset": 0, 00:11:17.516 "data_size": 65536 00:11:17.516 }, 00:11:17.516 { 00:11:17.516 "name": "BaseBdev2", 00:11:17.516 "uuid": "92d8948c-8d42-4e62-8fc2-8448a03a1d4b", 00:11:17.516 "is_configured": 
true, 00:11:17.516 "data_offset": 0, 00:11:17.516 "data_size": 65536 00:11:17.516 }, 00:11:17.516 { 00:11:17.516 "name": "BaseBdev3", 00:11:17.516 "uuid": "b89bca96-706b-431c-8f39-a5c41cd43530", 00:11:17.516 "is_configured": true, 00:11:17.516 "data_offset": 0, 00:11:17.516 "data_size": 65536 00:11:17.516 }, 00:11:17.516 { 00:11:17.516 "name": "BaseBdev4", 00:11:17.516 "uuid": "351d1f56-3a1f-486a-965b-6785153c4d2b", 00:11:17.516 "is_configured": true, 00:11:17.516 "data_offset": 0, 00:11:17.516 "data_size": 65536 00:11:17.516 } 00:11:17.516 ] 00:11:17.516 }' 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.516 16:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.776 [2024-11-08 16:52:47.278200] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.776 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.037 [2024-11-08 16:52:47.333735] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.037 16:52:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.037 [2024-11-08 16:52:47.389385] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:18.037 [2024-11-08 16:52:47.389452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.037 BaseBdev2 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.037 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.037 [ 00:11:18.037 { 00:11:18.037 "name": "BaseBdev2", 00:11:18.037 "aliases": [ 00:11:18.037 "8aa85c78-b21e-4296-b2bb-1effb5ec196a" 00:11:18.037 ], 00:11:18.037 "product_name": "Malloc disk", 00:11:18.037 "block_size": 512, 00:11:18.037 "num_blocks": 65536, 00:11:18.037 "uuid": "8aa85c78-b21e-4296-b2bb-1effb5ec196a", 00:11:18.037 "assigned_rate_limits": { 00:11:18.037 "rw_ios_per_sec": 0, 00:11:18.037 "rw_mbytes_per_sec": 0, 00:11:18.037 "r_mbytes_per_sec": 0, 00:11:18.037 "w_mbytes_per_sec": 0 00:11:18.037 }, 00:11:18.037 "claimed": false, 00:11:18.037 "zoned": false, 00:11:18.037 "supported_io_types": { 00:11:18.037 "read": true, 00:11:18.038 "write": true, 00:11:18.038 "unmap": true, 00:11:18.038 "flush": true, 00:11:18.038 "reset": true, 00:11:18.038 "nvme_admin": false, 00:11:18.038 "nvme_io": false, 00:11:18.038 "nvme_io_md": false, 00:11:18.038 "write_zeroes": true, 00:11:18.038 "zcopy": true, 00:11:18.038 "get_zone_info": false, 00:11:18.038 "zone_management": false, 00:11:18.038 "zone_append": false, 00:11:18.038 "compare": false, 00:11:18.038 "compare_and_write": false, 00:11:18.038 "abort": true, 00:11:18.038 "seek_hole": false, 00:11:18.038 "seek_data": false, 
00:11:18.038 "copy": true, 00:11:18.038 "nvme_iov_md": false 00:11:18.038 }, 00:11:18.038 "memory_domains": [ 00:11:18.038 { 00:11:18.038 "dma_device_id": "system", 00:11:18.038 "dma_device_type": 1 00:11:18.038 }, 00:11:18.038 { 00:11:18.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.038 "dma_device_type": 2 00:11:18.038 } 00:11:18.038 ], 00:11:18.038 "driver_specific": {} 00:11:18.038 } 00:11:18.038 ] 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.038 BaseBdev3 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:18.038 
16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.038 [ 00:11:18.038 { 00:11:18.038 "name": "BaseBdev3", 00:11:18.038 "aliases": [ 00:11:18.038 "d65564b1-3652-4803-aab1-8d44d11751be" 00:11:18.038 ], 00:11:18.038 "product_name": "Malloc disk", 00:11:18.038 "block_size": 512, 00:11:18.038 "num_blocks": 65536, 00:11:18.038 "uuid": "d65564b1-3652-4803-aab1-8d44d11751be", 00:11:18.038 "assigned_rate_limits": { 00:11:18.038 "rw_ios_per_sec": 0, 00:11:18.038 "rw_mbytes_per_sec": 0, 00:11:18.038 "r_mbytes_per_sec": 0, 00:11:18.038 "w_mbytes_per_sec": 0 00:11:18.038 }, 00:11:18.038 "claimed": false, 00:11:18.038 "zoned": false, 00:11:18.038 "supported_io_types": { 00:11:18.038 "read": true, 00:11:18.038 "write": true, 00:11:18.038 "unmap": true, 00:11:18.038 "flush": true, 00:11:18.038 "reset": true, 00:11:18.038 "nvme_admin": false, 00:11:18.038 "nvme_io": false, 00:11:18.038 "nvme_io_md": false, 00:11:18.038 "write_zeroes": true, 00:11:18.038 "zcopy": true, 00:11:18.038 "get_zone_info": false, 00:11:18.038 "zone_management": false, 00:11:18.038 "zone_append": false, 00:11:18.038 "compare": false, 00:11:18.038 "compare_and_write": false, 00:11:18.038 "abort": true, 00:11:18.038 "seek_hole": false, 00:11:18.038 "seek_data": false, 00:11:18.038 
"copy": true, 00:11:18.038 "nvme_iov_md": false 00:11:18.038 }, 00:11:18.038 "memory_domains": [ 00:11:18.038 { 00:11:18.038 "dma_device_id": "system", 00:11:18.038 "dma_device_type": 1 00:11:18.038 }, 00:11:18.038 { 00:11:18.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.038 "dma_device_type": 2 00:11:18.038 } 00:11:18.038 ], 00:11:18.038 "driver_specific": {} 00:11:18.038 } 00:11:18.038 ] 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.038 BaseBdev4 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:18.038 16:52:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.038 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.298 [ 00:11:18.298 { 00:11:18.298 "name": "BaseBdev4", 00:11:18.298 "aliases": [ 00:11:18.298 "e5e12168-4e19-491a-9a25-3057e1e2741a" 00:11:18.298 ], 00:11:18.298 "product_name": "Malloc disk", 00:11:18.298 "block_size": 512, 00:11:18.298 "num_blocks": 65536, 00:11:18.298 "uuid": "e5e12168-4e19-491a-9a25-3057e1e2741a", 00:11:18.298 "assigned_rate_limits": { 00:11:18.298 "rw_ios_per_sec": 0, 00:11:18.298 "rw_mbytes_per_sec": 0, 00:11:18.298 "r_mbytes_per_sec": 0, 00:11:18.298 "w_mbytes_per_sec": 0 00:11:18.298 }, 00:11:18.298 "claimed": false, 00:11:18.298 "zoned": false, 00:11:18.298 "supported_io_types": { 00:11:18.298 "read": true, 00:11:18.298 "write": true, 00:11:18.298 "unmap": true, 00:11:18.298 "flush": true, 00:11:18.298 "reset": true, 00:11:18.298 "nvme_admin": false, 00:11:18.298 "nvme_io": false, 00:11:18.298 "nvme_io_md": false, 00:11:18.298 "write_zeroes": true, 00:11:18.298 "zcopy": true, 00:11:18.298 "get_zone_info": false, 00:11:18.298 "zone_management": false, 00:11:18.298 "zone_append": false, 00:11:18.298 "compare": false, 00:11:18.298 "compare_and_write": false, 00:11:18.298 "abort": true, 00:11:18.298 "seek_hole": false, 00:11:18.298 "seek_data": false, 00:11:18.298 "copy": true, 
00:11:18.298 "nvme_iov_md": false 00:11:18.298 }, 00:11:18.298 "memory_domains": [ 00:11:18.298 { 00:11:18.298 "dma_device_id": "system", 00:11:18.298 "dma_device_type": 1 00:11:18.298 }, 00:11:18.298 { 00:11:18.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.298 "dma_device_type": 2 00:11:18.298 } 00:11:18.298 ], 00:11:18.298 "driver_specific": {} 00:11:18.298 } 00:11:18.298 ] 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.298 [2024-11-08 16:52:47.603388] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.298 [2024-11-08 16:52:47.603894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.298 [2024-11-08 16:52:47.603943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.298 [2024-11-08 16:52:47.605984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.298 [2024-11-08 16:52:47.606044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.298 16:52:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.298 "name": "Existed_Raid", 00:11:18.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.298 "strip_size_kb": 64, 00:11:18.298 "state": "configuring", 00:11:18.298 
"raid_level": "raid0", 00:11:18.298 "superblock": false, 00:11:18.298 "num_base_bdevs": 4, 00:11:18.298 "num_base_bdevs_discovered": 3, 00:11:18.298 "num_base_bdevs_operational": 4, 00:11:18.298 "base_bdevs_list": [ 00:11:18.298 { 00:11:18.298 "name": "BaseBdev1", 00:11:18.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.298 "is_configured": false, 00:11:18.298 "data_offset": 0, 00:11:18.298 "data_size": 0 00:11:18.298 }, 00:11:18.298 { 00:11:18.298 "name": "BaseBdev2", 00:11:18.298 "uuid": "8aa85c78-b21e-4296-b2bb-1effb5ec196a", 00:11:18.298 "is_configured": true, 00:11:18.298 "data_offset": 0, 00:11:18.298 "data_size": 65536 00:11:18.298 }, 00:11:18.298 { 00:11:18.298 "name": "BaseBdev3", 00:11:18.298 "uuid": "d65564b1-3652-4803-aab1-8d44d11751be", 00:11:18.298 "is_configured": true, 00:11:18.298 "data_offset": 0, 00:11:18.298 "data_size": 65536 00:11:18.298 }, 00:11:18.298 { 00:11:18.298 "name": "BaseBdev4", 00:11:18.298 "uuid": "e5e12168-4e19-491a-9a25-3057e1e2741a", 00:11:18.298 "is_configured": true, 00:11:18.298 "data_offset": 0, 00:11:18.298 "data_size": 65536 00:11:18.298 } 00:11:18.298 ] 00:11:18.298 }' 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.298 16:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.557 [2024-11-08 16:52:48.070575] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.557 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.815 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.815 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.815 "name": "Existed_Raid", 00:11:18.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.815 "strip_size_kb": 64, 00:11:18.815 "state": "configuring", 00:11:18.815 "raid_level": "raid0", 00:11:18.815 "superblock": false, 00:11:18.815 
"num_base_bdevs": 4, 00:11:18.815 "num_base_bdevs_discovered": 2, 00:11:18.815 "num_base_bdevs_operational": 4, 00:11:18.815 "base_bdevs_list": [ 00:11:18.815 { 00:11:18.815 "name": "BaseBdev1", 00:11:18.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.815 "is_configured": false, 00:11:18.815 "data_offset": 0, 00:11:18.815 "data_size": 0 00:11:18.815 }, 00:11:18.815 { 00:11:18.815 "name": null, 00:11:18.815 "uuid": "8aa85c78-b21e-4296-b2bb-1effb5ec196a", 00:11:18.815 "is_configured": false, 00:11:18.815 "data_offset": 0, 00:11:18.815 "data_size": 65536 00:11:18.815 }, 00:11:18.815 { 00:11:18.815 "name": "BaseBdev3", 00:11:18.815 "uuid": "d65564b1-3652-4803-aab1-8d44d11751be", 00:11:18.815 "is_configured": true, 00:11:18.815 "data_offset": 0, 00:11:18.815 "data_size": 65536 00:11:18.815 }, 00:11:18.815 { 00:11:18.815 "name": "BaseBdev4", 00:11:18.815 "uuid": "e5e12168-4e19-491a-9a25-3057e1e2741a", 00:11:18.815 "is_configured": true, 00:11:18.815 "data_offset": 0, 00:11:18.815 "data_size": 65536 00:11:18.815 } 00:11:18.815 ] 00:11:18.815 }' 00:11:18.815 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.815 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:19.074 16:52:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.074 [2024-11-08 16:52:48.552780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.074 BaseBdev1 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.074 [ 00:11:19.074 { 00:11:19.074 "name": "BaseBdev1", 00:11:19.074 "aliases": [ 00:11:19.074 "9d832e39-73bc-4435-be07-ad08f8013b6a" 00:11:19.074 ], 00:11:19.074 "product_name": "Malloc disk", 00:11:19.074 "block_size": 512, 00:11:19.074 "num_blocks": 65536, 00:11:19.074 "uuid": "9d832e39-73bc-4435-be07-ad08f8013b6a", 00:11:19.074 "assigned_rate_limits": { 00:11:19.074 "rw_ios_per_sec": 0, 00:11:19.074 "rw_mbytes_per_sec": 0, 00:11:19.074 "r_mbytes_per_sec": 0, 00:11:19.074 "w_mbytes_per_sec": 0 00:11:19.074 }, 00:11:19.074 "claimed": true, 00:11:19.074 "claim_type": "exclusive_write", 00:11:19.074 "zoned": false, 00:11:19.074 "supported_io_types": { 00:11:19.074 "read": true, 00:11:19.074 "write": true, 00:11:19.074 "unmap": true, 00:11:19.074 "flush": true, 00:11:19.074 "reset": true, 00:11:19.074 "nvme_admin": false, 00:11:19.074 "nvme_io": false, 00:11:19.074 "nvme_io_md": false, 00:11:19.074 "write_zeroes": true, 00:11:19.074 "zcopy": true, 00:11:19.074 "get_zone_info": false, 00:11:19.074 "zone_management": false, 00:11:19.074 "zone_append": false, 00:11:19.074 "compare": false, 00:11:19.074 "compare_and_write": false, 00:11:19.074 "abort": true, 00:11:19.074 "seek_hole": false, 00:11:19.074 "seek_data": false, 00:11:19.074 "copy": true, 00:11:19.074 "nvme_iov_md": false 00:11:19.074 }, 00:11:19.074 "memory_domains": [ 00:11:19.074 { 00:11:19.074 "dma_device_id": "system", 00:11:19.074 "dma_device_type": 1 00:11:19.074 }, 00:11:19.074 { 00:11:19.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.074 "dma_device_type": 2 00:11:19.074 } 00:11:19.074 ], 00:11:19.074 "driver_specific": {} 00:11:19.074 } 00:11:19.074 ] 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.074 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.334 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.334 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.334 "name": "Existed_Raid", 00:11:19.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.334 "strip_size_kb": 64, 00:11:19.334 "state": "configuring", 00:11:19.334 "raid_level": "raid0", 00:11:19.334 "superblock": false, 
00:11:19.334 "num_base_bdevs": 4, 00:11:19.334 "num_base_bdevs_discovered": 3, 00:11:19.334 "num_base_bdevs_operational": 4, 00:11:19.334 "base_bdevs_list": [ 00:11:19.334 { 00:11:19.334 "name": "BaseBdev1", 00:11:19.334 "uuid": "9d832e39-73bc-4435-be07-ad08f8013b6a", 00:11:19.334 "is_configured": true, 00:11:19.334 "data_offset": 0, 00:11:19.334 "data_size": 65536 00:11:19.334 }, 00:11:19.334 { 00:11:19.334 "name": null, 00:11:19.334 "uuid": "8aa85c78-b21e-4296-b2bb-1effb5ec196a", 00:11:19.334 "is_configured": false, 00:11:19.334 "data_offset": 0, 00:11:19.334 "data_size": 65536 00:11:19.334 }, 00:11:19.334 { 00:11:19.334 "name": "BaseBdev3", 00:11:19.334 "uuid": "d65564b1-3652-4803-aab1-8d44d11751be", 00:11:19.334 "is_configured": true, 00:11:19.334 "data_offset": 0, 00:11:19.334 "data_size": 65536 00:11:19.334 }, 00:11:19.334 { 00:11:19.334 "name": "BaseBdev4", 00:11:19.334 "uuid": "e5e12168-4e19-491a-9a25-3057e1e2741a", 00:11:19.334 "is_configured": true, 00:11:19.334 "data_offset": 0, 00:11:19.334 "data_size": 65536 00:11:19.334 } 00:11:19.334 ] 00:11:19.334 }' 00:11:19.334 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.334 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.593 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.593 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.593 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.593 16:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:19.593 16:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:19.593 16:52:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.593 [2024-11-08 16:52:49.032019] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.593 16:52:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.593 "name": "Existed_Raid", 00:11:19.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.593 "strip_size_kb": 64, 00:11:19.593 "state": "configuring", 00:11:19.593 "raid_level": "raid0", 00:11:19.593 "superblock": false, 00:11:19.593 "num_base_bdevs": 4, 00:11:19.593 "num_base_bdevs_discovered": 2, 00:11:19.593 "num_base_bdevs_operational": 4, 00:11:19.593 "base_bdevs_list": [ 00:11:19.593 { 00:11:19.593 "name": "BaseBdev1", 00:11:19.593 "uuid": "9d832e39-73bc-4435-be07-ad08f8013b6a", 00:11:19.593 "is_configured": true, 00:11:19.593 "data_offset": 0, 00:11:19.593 "data_size": 65536 00:11:19.593 }, 00:11:19.593 { 00:11:19.593 "name": null, 00:11:19.593 "uuid": "8aa85c78-b21e-4296-b2bb-1effb5ec196a", 00:11:19.593 "is_configured": false, 00:11:19.593 "data_offset": 0, 00:11:19.593 "data_size": 65536 00:11:19.593 }, 00:11:19.593 { 00:11:19.593 "name": null, 00:11:19.593 "uuid": "d65564b1-3652-4803-aab1-8d44d11751be", 00:11:19.593 "is_configured": false, 00:11:19.593 "data_offset": 0, 00:11:19.593 "data_size": 65536 00:11:19.593 }, 00:11:19.593 { 00:11:19.593 "name": "BaseBdev4", 00:11:19.593 "uuid": "e5e12168-4e19-491a-9a25-3057e1e2741a", 00:11:19.593 "is_configured": true, 00:11:19.593 "data_offset": 0, 00:11:19.593 "data_size": 65536 00:11:19.593 } 00:11:19.593 ] 00:11:19.593 }' 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.593 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.219 [2024-11-08 16:52:49.559265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.219 "name": "Existed_Raid", 00:11:20.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.219 "strip_size_kb": 64, 00:11:20.219 "state": "configuring", 00:11:20.219 "raid_level": "raid0", 00:11:20.219 "superblock": false, 00:11:20.219 "num_base_bdevs": 4, 00:11:20.219 "num_base_bdevs_discovered": 3, 00:11:20.219 "num_base_bdevs_operational": 4, 00:11:20.219 "base_bdevs_list": [ 00:11:20.219 { 00:11:20.219 "name": "BaseBdev1", 00:11:20.219 "uuid": "9d832e39-73bc-4435-be07-ad08f8013b6a", 00:11:20.219 "is_configured": true, 00:11:20.219 "data_offset": 0, 00:11:20.219 "data_size": 65536 00:11:20.219 }, 00:11:20.219 { 00:11:20.219 "name": null, 00:11:20.219 "uuid": "8aa85c78-b21e-4296-b2bb-1effb5ec196a", 00:11:20.219 "is_configured": false, 00:11:20.219 "data_offset": 0, 00:11:20.219 "data_size": 65536 00:11:20.219 }, 00:11:20.219 { 00:11:20.219 "name": "BaseBdev3", 00:11:20.219 "uuid": "d65564b1-3652-4803-aab1-8d44d11751be", 
00:11:20.219 "is_configured": true, 00:11:20.219 "data_offset": 0, 00:11:20.219 "data_size": 65536 00:11:20.219 }, 00:11:20.219 { 00:11:20.219 "name": "BaseBdev4", 00:11:20.219 "uuid": "e5e12168-4e19-491a-9a25-3057e1e2741a", 00:11:20.219 "is_configured": true, 00:11:20.219 "data_offset": 0, 00:11:20.219 "data_size": 65536 00:11:20.219 } 00:11:20.219 ] 00:11:20.219 }' 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.219 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.478 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.478 16:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.478 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.478 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.478 16:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.478 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:20.478 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.737 [2024-11-08 16:52:50.010553] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.737 16:52:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.737 "name": "Existed_Raid", 00:11:20.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.737 "strip_size_kb": 64, 00:11:20.737 "state": "configuring", 00:11:20.737 "raid_level": "raid0", 00:11:20.737 "superblock": false, 00:11:20.737 "num_base_bdevs": 4, 00:11:20.737 "num_base_bdevs_discovered": 2, 00:11:20.737 
"num_base_bdevs_operational": 4, 00:11:20.737 "base_bdevs_list": [ 00:11:20.737 { 00:11:20.737 "name": null, 00:11:20.737 "uuid": "9d832e39-73bc-4435-be07-ad08f8013b6a", 00:11:20.737 "is_configured": false, 00:11:20.737 "data_offset": 0, 00:11:20.737 "data_size": 65536 00:11:20.737 }, 00:11:20.737 { 00:11:20.737 "name": null, 00:11:20.737 "uuid": "8aa85c78-b21e-4296-b2bb-1effb5ec196a", 00:11:20.737 "is_configured": false, 00:11:20.737 "data_offset": 0, 00:11:20.737 "data_size": 65536 00:11:20.737 }, 00:11:20.737 { 00:11:20.737 "name": "BaseBdev3", 00:11:20.737 "uuid": "d65564b1-3652-4803-aab1-8d44d11751be", 00:11:20.737 "is_configured": true, 00:11:20.737 "data_offset": 0, 00:11:20.737 "data_size": 65536 00:11:20.737 }, 00:11:20.737 { 00:11:20.737 "name": "BaseBdev4", 00:11:20.737 "uuid": "e5e12168-4e19-491a-9a25-3057e1e2741a", 00:11:20.737 "is_configured": true, 00:11:20.737 "data_offset": 0, 00:11:20.737 "data_size": 65536 00:11:20.737 } 00:11:20.737 ] 00:11:20.737 }' 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.737 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.996 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.996 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.996 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.996 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.996 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.996 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:20.996 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:11:20.996 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.996 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.996 [2024-11-08 16:52:50.520529] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.255 16:52:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.255 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.255 "name": "Existed_Raid", 00:11:21.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.255 "strip_size_kb": 64, 00:11:21.255 "state": "configuring", 00:11:21.255 "raid_level": "raid0", 00:11:21.255 "superblock": false, 00:11:21.255 "num_base_bdevs": 4, 00:11:21.255 "num_base_bdevs_discovered": 3, 00:11:21.255 "num_base_bdevs_operational": 4, 00:11:21.255 "base_bdevs_list": [ 00:11:21.255 { 00:11:21.255 "name": null, 00:11:21.255 "uuid": "9d832e39-73bc-4435-be07-ad08f8013b6a", 00:11:21.255 "is_configured": false, 00:11:21.255 "data_offset": 0, 00:11:21.255 "data_size": 65536 00:11:21.255 }, 00:11:21.255 { 00:11:21.255 "name": "BaseBdev2", 00:11:21.255 "uuid": "8aa85c78-b21e-4296-b2bb-1effb5ec196a", 00:11:21.255 "is_configured": true, 00:11:21.255 "data_offset": 0, 00:11:21.255 "data_size": 65536 00:11:21.255 }, 00:11:21.255 { 00:11:21.255 "name": "BaseBdev3", 00:11:21.255 "uuid": "d65564b1-3652-4803-aab1-8d44d11751be", 00:11:21.255 "is_configured": true, 00:11:21.255 "data_offset": 0, 00:11:21.255 "data_size": 65536 00:11:21.255 }, 00:11:21.255 { 00:11:21.255 "name": "BaseBdev4", 00:11:21.255 "uuid": "e5e12168-4e19-491a-9a25-3057e1e2741a", 00:11:21.255 "is_configured": true, 00:11:21.255 "data_offset": 0, 00:11:21.255 "data_size": 65536 00:11:21.255 } 00:11:21.255 ] 00:11:21.255 }' 00:11:21.256 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.256 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.515 16:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.515 16:52:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:21.515 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.515 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.515 16:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.515 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:21.515 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:21.515 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.515 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.515 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.515 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.515 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9d832e39-73bc-4435-be07-ad08f8013b6a 00:11:21.515 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.515 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.775 [2024-11-08 16:52:51.046702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:21.775 [2024-11-08 16:52:51.046748] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:21.775 [2024-11-08 16:52:51.046757] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:21.775 [2024-11-08 16:52:51.047018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:11:21.775 [2024-11-08 16:52:51.047167] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:21.775 [2024-11-08 16:52:51.047186] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:21.775 [2024-11-08 16:52:51.047371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.775 NewBaseBdev 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:21.775 [ 00:11:21.775 { 00:11:21.775 "name": "NewBaseBdev", 00:11:21.775 "aliases": [ 00:11:21.775 "9d832e39-73bc-4435-be07-ad08f8013b6a" 00:11:21.775 ], 00:11:21.775 "product_name": "Malloc disk", 00:11:21.775 "block_size": 512, 00:11:21.775 "num_blocks": 65536, 00:11:21.775 "uuid": "9d832e39-73bc-4435-be07-ad08f8013b6a", 00:11:21.775 "assigned_rate_limits": { 00:11:21.775 "rw_ios_per_sec": 0, 00:11:21.775 "rw_mbytes_per_sec": 0, 00:11:21.775 "r_mbytes_per_sec": 0, 00:11:21.775 "w_mbytes_per_sec": 0 00:11:21.775 }, 00:11:21.775 "claimed": true, 00:11:21.775 "claim_type": "exclusive_write", 00:11:21.775 "zoned": false, 00:11:21.775 "supported_io_types": { 00:11:21.775 "read": true, 00:11:21.775 "write": true, 00:11:21.775 "unmap": true, 00:11:21.775 "flush": true, 00:11:21.775 "reset": true, 00:11:21.775 "nvme_admin": false, 00:11:21.775 "nvme_io": false, 00:11:21.775 "nvme_io_md": false, 00:11:21.775 "write_zeroes": true, 00:11:21.775 "zcopy": true, 00:11:21.775 "get_zone_info": false, 00:11:21.775 "zone_management": false, 00:11:21.775 "zone_append": false, 00:11:21.775 "compare": false, 00:11:21.775 "compare_and_write": false, 00:11:21.775 "abort": true, 00:11:21.775 "seek_hole": false, 00:11:21.775 "seek_data": false, 00:11:21.775 "copy": true, 00:11:21.775 "nvme_iov_md": false 00:11:21.775 }, 00:11:21.775 "memory_domains": [ 00:11:21.775 { 00:11:21.775 "dma_device_id": "system", 00:11:21.775 "dma_device_type": 1 00:11:21.775 }, 00:11:21.775 { 00:11:21.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.775 "dma_device_type": 2 00:11:21.775 } 00:11:21.775 ], 00:11:21.775 "driver_specific": {} 00:11:21.775 } 00:11:21.775 ] 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.775 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.776 "name": "Existed_Raid", 00:11:21.776 "uuid": "ab5864cf-8643-4719-9e87-0474a43c6d1d", 00:11:21.776 "strip_size_kb": 64, 00:11:21.776 "state": "online", 00:11:21.776 "raid_level": "raid0", 00:11:21.776 "superblock": false, 00:11:21.776 "num_base_bdevs": 4, 00:11:21.776 
"num_base_bdevs_discovered": 4, 00:11:21.776 "num_base_bdevs_operational": 4, 00:11:21.776 "base_bdevs_list": [ 00:11:21.776 { 00:11:21.776 "name": "NewBaseBdev", 00:11:21.776 "uuid": "9d832e39-73bc-4435-be07-ad08f8013b6a", 00:11:21.776 "is_configured": true, 00:11:21.776 "data_offset": 0, 00:11:21.776 "data_size": 65536 00:11:21.776 }, 00:11:21.776 { 00:11:21.776 "name": "BaseBdev2", 00:11:21.776 "uuid": "8aa85c78-b21e-4296-b2bb-1effb5ec196a", 00:11:21.776 "is_configured": true, 00:11:21.776 "data_offset": 0, 00:11:21.776 "data_size": 65536 00:11:21.776 }, 00:11:21.776 { 00:11:21.776 "name": "BaseBdev3", 00:11:21.776 "uuid": "d65564b1-3652-4803-aab1-8d44d11751be", 00:11:21.776 "is_configured": true, 00:11:21.776 "data_offset": 0, 00:11:21.776 "data_size": 65536 00:11:21.776 }, 00:11:21.776 { 00:11:21.776 "name": "BaseBdev4", 00:11:21.776 "uuid": "e5e12168-4e19-491a-9a25-3057e1e2741a", 00:11:21.776 "is_configured": true, 00:11:21.776 "data_offset": 0, 00:11:21.776 "data_size": 65536 00:11:21.776 } 00:11:21.776 ] 00:11:21.776 }' 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.776 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.035 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:22.035 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:22.035 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.035 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.035 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.035 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.035 16:52:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:22.035 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.035 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.035 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.035 [2024-11-08 16:52:51.538256] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.035 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.295 "name": "Existed_Raid", 00:11:22.295 "aliases": [ 00:11:22.295 "ab5864cf-8643-4719-9e87-0474a43c6d1d" 00:11:22.295 ], 00:11:22.295 "product_name": "Raid Volume", 00:11:22.295 "block_size": 512, 00:11:22.295 "num_blocks": 262144, 00:11:22.295 "uuid": "ab5864cf-8643-4719-9e87-0474a43c6d1d", 00:11:22.295 "assigned_rate_limits": { 00:11:22.295 "rw_ios_per_sec": 0, 00:11:22.295 "rw_mbytes_per_sec": 0, 00:11:22.295 "r_mbytes_per_sec": 0, 00:11:22.295 "w_mbytes_per_sec": 0 00:11:22.295 }, 00:11:22.295 "claimed": false, 00:11:22.295 "zoned": false, 00:11:22.295 "supported_io_types": { 00:11:22.295 "read": true, 00:11:22.295 "write": true, 00:11:22.295 "unmap": true, 00:11:22.295 "flush": true, 00:11:22.295 "reset": true, 00:11:22.295 "nvme_admin": false, 00:11:22.295 "nvme_io": false, 00:11:22.295 "nvme_io_md": false, 00:11:22.295 "write_zeroes": true, 00:11:22.295 "zcopy": false, 00:11:22.295 "get_zone_info": false, 00:11:22.295 "zone_management": false, 00:11:22.295 "zone_append": false, 00:11:22.295 "compare": false, 00:11:22.295 "compare_and_write": false, 00:11:22.295 "abort": false, 00:11:22.295 "seek_hole": false, 00:11:22.295 "seek_data": false, 00:11:22.295 "copy": false, 00:11:22.295 "nvme_iov_md": false 00:11:22.295 }, 00:11:22.295 "memory_domains": [ 
00:11:22.295 { 00:11:22.295 "dma_device_id": "system", 00:11:22.295 "dma_device_type": 1 00:11:22.295 }, 00:11:22.295 { 00:11:22.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.295 "dma_device_type": 2 00:11:22.295 }, 00:11:22.295 { 00:11:22.295 "dma_device_id": "system", 00:11:22.295 "dma_device_type": 1 00:11:22.295 }, 00:11:22.295 { 00:11:22.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.295 "dma_device_type": 2 00:11:22.295 }, 00:11:22.295 { 00:11:22.295 "dma_device_id": "system", 00:11:22.295 "dma_device_type": 1 00:11:22.295 }, 00:11:22.295 { 00:11:22.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.295 "dma_device_type": 2 00:11:22.295 }, 00:11:22.295 { 00:11:22.295 "dma_device_id": "system", 00:11:22.295 "dma_device_type": 1 00:11:22.295 }, 00:11:22.295 { 00:11:22.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.295 "dma_device_type": 2 00:11:22.295 } 00:11:22.295 ], 00:11:22.295 "driver_specific": { 00:11:22.295 "raid": { 00:11:22.295 "uuid": "ab5864cf-8643-4719-9e87-0474a43c6d1d", 00:11:22.295 "strip_size_kb": 64, 00:11:22.295 "state": "online", 00:11:22.295 "raid_level": "raid0", 00:11:22.295 "superblock": false, 00:11:22.295 "num_base_bdevs": 4, 00:11:22.295 "num_base_bdevs_discovered": 4, 00:11:22.295 "num_base_bdevs_operational": 4, 00:11:22.295 "base_bdevs_list": [ 00:11:22.295 { 00:11:22.295 "name": "NewBaseBdev", 00:11:22.295 "uuid": "9d832e39-73bc-4435-be07-ad08f8013b6a", 00:11:22.295 "is_configured": true, 00:11:22.295 "data_offset": 0, 00:11:22.295 "data_size": 65536 00:11:22.295 }, 00:11:22.295 { 00:11:22.295 "name": "BaseBdev2", 00:11:22.295 "uuid": "8aa85c78-b21e-4296-b2bb-1effb5ec196a", 00:11:22.295 "is_configured": true, 00:11:22.295 "data_offset": 0, 00:11:22.295 "data_size": 65536 00:11:22.295 }, 00:11:22.295 { 00:11:22.295 "name": "BaseBdev3", 00:11:22.295 "uuid": "d65564b1-3652-4803-aab1-8d44d11751be", 00:11:22.295 "is_configured": true, 00:11:22.295 "data_offset": 0, 00:11:22.295 "data_size": 65536 
00:11:22.295 }, 00:11:22.295 { 00:11:22.295 "name": "BaseBdev4", 00:11:22.295 "uuid": "e5e12168-4e19-491a-9a25-3057e1e2741a", 00:11:22.295 "is_configured": true, 00:11:22.295 "data_offset": 0, 00:11:22.295 "data_size": 65536 00:11:22.295 } 00:11:22.295 ] 00:11:22.295 } 00:11:22.295 } 00:11:22.295 }' 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:22.295 BaseBdev2 00:11:22.295 BaseBdev3 00:11:22.295 BaseBdev4' 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.295 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.296 
16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.296 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.556 [2024-11-08 16:52:51.889278] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.556 [2024-11-08 16:52:51.889312] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.556 [2024-11-08 16:52:51.889410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.556 [2024-11-08 16:52:51.889491] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.556 [2024-11-08 16:52:51.889506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80353 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 80353 ']' 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80353 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80353 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:22.556 killing process with pid 80353 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80353' 00:11:22.556 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80353 00:11:22.556 [2024-11-08 16:52:51.926266] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.557 16:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80353 00:11:22.557 [2024-11-08 16:52:51.968254] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:22.817 00:11:22.817 real 0m9.673s 00:11:22.817 user 0m16.577s 00:11:22.817 sys 0m2.017s 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.817 ************************************ 00:11:22.817 END TEST raid_state_function_test 00:11:22.817 ************************************ 00:11:22.817 16:52:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:11:22.817 16:52:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:22.817 16:52:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.817 16:52:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.817 ************************************ 00:11:22.817 START TEST raid_state_function_test_sb 00:11:22.817 ************************************ 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:22.817 
16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81002 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81002' 00:11:22.817 Process raid pid: 81002 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81002 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81002 ']' 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.817 16:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.076 [2024-11-08 16:52:52.387150] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:23.076 [2024-11-08 16:52:52.387318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.076 [2024-11-08 16:52:52.552696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.336 [2024-11-08 16:52:52.604856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.336 [2024-11-08 16:52:52.647404] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.336 [2024-11-08 16:52:52.647454] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.905 [2024-11-08 16:52:53.269125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.905 [2024-11-08 16:52:53.269173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.905 [2024-11-08 16:52:53.269185] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.905 [2024-11-08 16:52:53.269194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.905 [2024-11-08 16:52:53.269200] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:23.905 [2024-11-08 16:52:53.269210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.905 [2024-11-08 16:52:53.269216] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:23.905 [2024-11-08 16:52:53.269226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.905 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.906 16:52:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.906 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.906 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.906 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.906 "name": "Existed_Raid", 00:11:23.906 "uuid": "bbcacb41-6e1a-49e0-9bce-43231b3aaaeb", 00:11:23.906 "strip_size_kb": 64, 00:11:23.906 "state": "configuring", 00:11:23.906 "raid_level": "raid0", 00:11:23.906 "superblock": true, 00:11:23.906 "num_base_bdevs": 4, 00:11:23.906 "num_base_bdevs_discovered": 0, 00:11:23.906 "num_base_bdevs_operational": 4, 00:11:23.906 "base_bdevs_list": [ 00:11:23.906 { 00:11:23.906 "name": "BaseBdev1", 00:11:23.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.906 "is_configured": false, 00:11:23.906 "data_offset": 0, 00:11:23.906 "data_size": 0 00:11:23.906 }, 00:11:23.906 { 00:11:23.906 "name": "BaseBdev2", 00:11:23.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.906 "is_configured": false, 00:11:23.906 "data_offset": 0, 00:11:23.906 "data_size": 0 00:11:23.906 }, 00:11:23.906 { 00:11:23.906 "name": "BaseBdev3", 00:11:23.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.906 "is_configured": false, 00:11:23.906 "data_offset": 0, 00:11:23.906 "data_size": 0 00:11:23.906 }, 00:11:23.906 { 00:11:23.906 "name": "BaseBdev4", 00:11:23.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.906 "is_configured": false, 00:11:23.906 "data_offset": 0, 00:11:23.906 "data_size": 0 00:11:23.906 } 00:11:23.906 ] 00:11:23.906 }' 00:11:23.906 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.906 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.476 16:52:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.476 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.476 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.476 [2024-11-08 16:52:53.732253] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.476 [2024-11-08 16:52:53.732321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:24.476 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.476 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:24.476 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.477 [2024-11-08 16:52:53.744327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.477 [2024-11-08 16:52:53.744370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.477 [2024-11-08 16:52:53.744378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.477 [2024-11-08 16:52:53.744387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.477 [2024-11-08 16:52:53.744394] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.477 [2024-11-08 16:52:53.744403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.477 [2024-11-08 16:52:53.744409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:24.477 [2024-11-08 16:52:53.744418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.477 [2024-11-08 16:52:53.761564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.477 BaseBdev1 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.477 [ 00:11:24.477 { 00:11:24.477 "name": "BaseBdev1", 00:11:24.477 "aliases": [ 00:11:24.477 "5527dbd2-3f9b-44c8-899c-4322cc7304eb" 00:11:24.477 ], 00:11:24.477 "product_name": "Malloc disk", 00:11:24.477 "block_size": 512, 00:11:24.477 "num_blocks": 65536, 00:11:24.477 "uuid": "5527dbd2-3f9b-44c8-899c-4322cc7304eb", 00:11:24.477 "assigned_rate_limits": { 00:11:24.477 "rw_ios_per_sec": 0, 00:11:24.477 "rw_mbytes_per_sec": 0, 00:11:24.477 "r_mbytes_per_sec": 0, 00:11:24.477 "w_mbytes_per_sec": 0 00:11:24.477 }, 00:11:24.477 "claimed": true, 00:11:24.477 "claim_type": "exclusive_write", 00:11:24.477 "zoned": false, 00:11:24.477 "supported_io_types": { 00:11:24.477 "read": true, 00:11:24.477 "write": true, 00:11:24.477 "unmap": true, 00:11:24.477 "flush": true, 00:11:24.477 "reset": true, 00:11:24.477 "nvme_admin": false, 00:11:24.477 "nvme_io": false, 00:11:24.477 "nvme_io_md": false, 00:11:24.477 "write_zeroes": true, 00:11:24.477 "zcopy": true, 00:11:24.477 "get_zone_info": false, 00:11:24.477 "zone_management": false, 00:11:24.477 "zone_append": false, 00:11:24.477 "compare": false, 00:11:24.477 "compare_and_write": false, 00:11:24.477 "abort": true, 00:11:24.477 "seek_hole": false, 00:11:24.477 "seek_data": false, 00:11:24.477 "copy": true, 00:11:24.477 "nvme_iov_md": false 00:11:24.477 }, 00:11:24.477 "memory_domains": [ 00:11:24.477 { 00:11:24.477 "dma_device_id": "system", 00:11:24.477 "dma_device_type": 1 00:11:24.477 }, 00:11:24.477 { 00:11:24.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.477 "dma_device_type": 2 00:11:24.477 } 00:11:24.477 ], 00:11:24.477 "driver_specific": {} 
00:11:24.477 } 00:11:24.477 ] 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.477 "name": "Existed_Raid", 00:11:24.477 "uuid": "ecd06bc1-8ee5-47d2-9614-ec608d8c2512", 00:11:24.477 "strip_size_kb": 64, 00:11:24.477 "state": "configuring", 00:11:24.477 "raid_level": "raid0", 00:11:24.477 "superblock": true, 00:11:24.477 "num_base_bdevs": 4, 00:11:24.477 "num_base_bdevs_discovered": 1, 00:11:24.477 "num_base_bdevs_operational": 4, 00:11:24.477 "base_bdevs_list": [ 00:11:24.477 { 00:11:24.477 "name": "BaseBdev1", 00:11:24.477 "uuid": "5527dbd2-3f9b-44c8-899c-4322cc7304eb", 00:11:24.477 "is_configured": true, 00:11:24.477 "data_offset": 2048, 00:11:24.477 "data_size": 63488 00:11:24.477 }, 00:11:24.477 { 00:11:24.477 "name": "BaseBdev2", 00:11:24.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.477 "is_configured": false, 00:11:24.477 "data_offset": 0, 00:11:24.477 "data_size": 0 00:11:24.477 }, 00:11:24.477 { 00:11:24.477 "name": "BaseBdev3", 00:11:24.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.477 "is_configured": false, 00:11:24.477 "data_offset": 0, 00:11:24.477 "data_size": 0 00:11:24.477 }, 00:11:24.477 { 00:11:24.477 "name": "BaseBdev4", 00:11:24.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.477 "is_configured": false, 00:11:24.477 "data_offset": 0, 00:11:24.477 "data_size": 0 00:11:24.477 } 00:11:24.477 ] 00:11:24.477 }' 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.477 16:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.082 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.082 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.082 16:52:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.082 [2024-11-08 16:52:54.288764] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.082 [2024-11-08 16:52:54.288825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:25.082 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.082 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.083 [2024-11-08 16:52:54.300763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.083 [2024-11-08 16:52:54.302685] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.083 [2024-11-08 16:52:54.302722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.083 [2024-11-08 16:52:54.302731] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:25.083 [2024-11-08 16:52:54.302740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:25.083 [2024-11-08 16:52:54.302747] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:25.083 [2024-11-08 16:52:54.302755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:25.083 16:52:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.083 "name": 
"Existed_Raid", 00:11:25.083 "uuid": "c0e7ee2b-1072-4946-a8ba-a407264baca7", 00:11:25.083 "strip_size_kb": 64, 00:11:25.083 "state": "configuring", 00:11:25.083 "raid_level": "raid0", 00:11:25.083 "superblock": true, 00:11:25.083 "num_base_bdevs": 4, 00:11:25.083 "num_base_bdevs_discovered": 1, 00:11:25.083 "num_base_bdevs_operational": 4, 00:11:25.083 "base_bdevs_list": [ 00:11:25.083 { 00:11:25.083 "name": "BaseBdev1", 00:11:25.083 "uuid": "5527dbd2-3f9b-44c8-899c-4322cc7304eb", 00:11:25.083 "is_configured": true, 00:11:25.083 "data_offset": 2048, 00:11:25.083 "data_size": 63488 00:11:25.083 }, 00:11:25.083 { 00:11:25.083 "name": "BaseBdev2", 00:11:25.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.083 "is_configured": false, 00:11:25.083 "data_offset": 0, 00:11:25.083 "data_size": 0 00:11:25.083 }, 00:11:25.083 { 00:11:25.083 "name": "BaseBdev3", 00:11:25.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.083 "is_configured": false, 00:11:25.083 "data_offset": 0, 00:11:25.083 "data_size": 0 00:11:25.083 }, 00:11:25.083 { 00:11:25.083 "name": "BaseBdev4", 00:11:25.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.083 "is_configured": false, 00:11:25.083 "data_offset": 0, 00:11:25.083 "data_size": 0 00:11:25.083 } 00:11:25.083 ] 00:11:25.083 }' 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.083 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.343 [2024-11-08 16:52:54.750131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:11:25.343 BaseBdev2 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.343 [ 00:11:25.343 { 00:11:25.343 "name": "BaseBdev2", 00:11:25.343 "aliases": [ 00:11:25.343 "44822eb6-a35a-4487-8b53-67e2ab288fde" 00:11:25.343 ], 00:11:25.343 "product_name": "Malloc disk", 00:11:25.343 "block_size": 512, 00:11:25.343 "num_blocks": 65536, 00:11:25.343 "uuid": "44822eb6-a35a-4487-8b53-67e2ab288fde", 00:11:25.343 
"assigned_rate_limits": { 00:11:25.343 "rw_ios_per_sec": 0, 00:11:25.343 "rw_mbytes_per_sec": 0, 00:11:25.343 "r_mbytes_per_sec": 0, 00:11:25.343 "w_mbytes_per_sec": 0 00:11:25.343 }, 00:11:25.343 "claimed": true, 00:11:25.343 "claim_type": "exclusive_write", 00:11:25.343 "zoned": false, 00:11:25.343 "supported_io_types": { 00:11:25.343 "read": true, 00:11:25.343 "write": true, 00:11:25.343 "unmap": true, 00:11:25.343 "flush": true, 00:11:25.343 "reset": true, 00:11:25.343 "nvme_admin": false, 00:11:25.343 "nvme_io": false, 00:11:25.343 "nvme_io_md": false, 00:11:25.343 "write_zeroes": true, 00:11:25.343 "zcopy": true, 00:11:25.343 "get_zone_info": false, 00:11:25.343 "zone_management": false, 00:11:25.343 "zone_append": false, 00:11:25.343 "compare": false, 00:11:25.343 "compare_and_write": false, 00:11:25.343 "abort": true, 00:11:25.343 "seek_hole": false, 00:11:25.343 "seek_data": false, 00:11:25.343 "copy": true, 00:11:25.343 "nvme_iov_md": false 00:11:25.343 }, 00:11:25.343 "memory_domains": [ 00:11:25.343 { 00:11:25.343 "dma_device_id": "system", 00:11:25.343 "dma_device_type": 1 00:11:25.343 }, 00:11:25.343 { 00:11:25.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.343 "dma_device_type": 2 00:11:25.343 } 00:11:25.343 ], 00:11:25.343 "driver_specific": {} 00:11:25.343 } 00:11:25.343 ] 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.343 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.343 "name": "Existed_Raid", 00:11:25.343 "uuid": "c0e7ee2b-1072-4946-a8ba-a407264baca7", 00:11:25.343 "strip_size_kb": 64, 00:11:25.343 "state": "configuring", 00:11:25.343 "raid_level": "raid0", 00:11:25.343 "superblock": true, 00:11:25.343 "num_base_bdevs": 4, 00:11:25.343 "num_base_bdevs_discovered": 2, 00:11:25.343 "num_base_bdevs_operational": 4, 
00:11:25.343 "base_bdevs_list": [ 00:11:25.343 { 00:11:25.343 "name": "BaseBdev1", 00:11:25.343 "uuid": "5527dbd2-3f9b-44c8-899c-4322cc7304eb", 00:11:25.343 "is_configured": true, 00:11:25.343 "data_offset": 2048, 00:11:25.343 "data_size": 63488 00:11:25.343 }, 00:11:25.343 { 00:11:25.343 "name": "BaseBdev2", 00:11:25.343 "uuid": "44822eb6-a35a-4487-8b53-67e2ab288fde", 00:11:25.343 "is_configured": true, 00:11:25.343 "data_offset": 2048, 00:11:25.343 "data_size": 63488 00:11:25.343 }, 00:11:25.343 { 00:11:25.343 "name": "BaseBdev3", 00:11:25.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.343 "is_configured": false, 00:11:25.344 "data_offset": 0, 00:11:25.344 "data_size": 0 00:11:25.344 }, 00:11:25.344 { 00:11:25.344 "name": "BaseBdev4", 00:11:25.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.344 "is_configured": false, 00:11:25.344 "data_offset": 0, 00:11:25.344 "data_size": 0 00:11:25.344 } 00:11:25.344 ] 00:11:25.344 }' 00:11:25.344 16:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.344 16:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.911 BaseBdev3 00:11:25.911 [2024-11-08 16:52:55.216418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.911 [ 00:11:25.911 { 00:11:25.911 "name": "BaseBdev3", 00:11:25.911 "aliases": [ 00:11:25.911 "da924d42-3936-4ea7-819e-2fbd82995fa2" 00:11:25.911 ], 00:11:25.911 "product_name": "Malloc disk", 00:11:25.911 "block_size": 512, 00:11:25.911 "num_blocks": 65536, 00:11:25.911 "uuid": "da924d42-3936-4ea7-819e-2fbd82995fa2", 00:11:25.911 "assigned_rate_limits": { 00:11:25.911 "rw_ios_per_sec": 0, 00:11:25.911 "rw_mbytes_per_sec": 0, 00:11:25.911 "r_mbytes_per_sec": 0, 00:11:25.911 "w_mbytes_per_sec": 0 00:11:25.911 }, 00:11:25.911 "claimed": true, 00:11:25.911 "claim_type": "exclusive_write", 00:11:25.911 "zoned": false, 00:11:25.911 "supported_io_types": { 00:11:25.911 "read": true, 00:11:25.911 
"write": true, 00:11:25.911 "unmap": true, 00:11:25.911 "flush": true, 00:11:25.911 "reset": true, 00:11:25.911 "nvme_admin": false, 00:11:25.911 "nvme_io": false, 00:11:25.911 "nvme_io_md": false, 00:11:25.911 "write_zeroes": true, 00:11:25.911 "zcopy": true, 00:11:25.911 "get_zone_info": false, 00:11:25.911 "zone_management": false, 00:11:25.911 "zone_append": false, 00:11:25.911 "compare": false, 00:11:25.911 "compare_and_write": false, 00:11:25.911 "abort": true, 00:11:25.911 "seek_hole": false, 00:11:25.911 "seek_data": false, 00:11:25.911 "copy": true, 00:11:25.911 "nvme_iov_md": false 00:11:25.911 }, 00:11:25.911 "memory_domains": [ 00:11:25.911 { 00:11:25.911 "dma_device_id": "system", 00:11:25.911 "dma_device_type": 1 00:11:25.911 }, 00:11:25.911 { 00:11:25.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.911 "dma_device_type": 2 00:11:25.911 } 00:11:25.911 ], 00:11:25.911 "driver_specific": {} 00:11:25.911 } 00:11:25.911 ] 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.911 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.911 "name": "Existed_Raid", 00:11:25.911 "uuid": "c0e7ee2b-1072-4946-a8ba-a407264baca7", 00:11:25.911 "strip_size_kb": 64, 00:11:25.911 "state": "configuring", 00:11:25.911 "raid_level": "raid0", 00:11:25.911 "superblock": true, 00:11:25.911 "num_base_bdevs": 4, 00:11:25.911 "num_base_bdevs_discovered": 3, 00:11:25.911 "num_base_bdevs_operational": 4, 00:11:25.911 "base_bdevs_list": [ 00:11:25.911 { 00:11:25.911 "name": "BaseBdev1", 00:11:25.911 "uuid": "5527dbd2-3f9b-44c8-899c-4322cc7304eb", 00:11:25.911 "is_configured": true, 00:11:25.911 "data_offset": 2048, 00:11:25.911 "data_size": 63488 00:11:25.911 }, 00:11:25.911 { 00:11:25.912 "name": "BaseBdev2", 00:11:25.912 "uuid": 
"44822eb6-a35a-4487-8b53-67e2ab288fde", 00:11:25.912 "is_configured": true, 00:11:25.912 "data_offset": 2048, 00:11:25.912 "data_size": 63488 00:11:25.912 }, 00:11:25.912 { 00:11:25.912 "name": "BaseBdev3", 00:11:25.912 "uuid": "da924d42-3936-4ea7-819e-2fbd82995fa2", 00:11:25.912 "is_configured": true, 00:11:25.912 "data_offset": 2048, 00:11:25.912 "data_size": 63488 00:11:25.912 }, 00:11:25.912 { 00:11:25.912 "name": "BaseBdev4", 00:11:25.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.912 "is_configured": false, 00:11:25.912 "data_offset": 0, 00:11:25.912 "data_size": 0 00:11:25.912 } 00:11:25.912 ] 00:11:25.912 }' 00:11:25.912 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.912 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.171 [2024-11-08 16:52:55.654748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.171 [2024-11-08 16:52:55.654965] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:26.171 [2024-11-08 16:52:55.654983] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:26.171 [2024-11-08 16:52:55.655270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:26.171 [2024-11-08 16:52:55.655407] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:26.171 [2024-11-08 16:52:55.655421] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 
00:11:26.171 BaseBdev4 00:11:26.171 [2024-11-08 16:52:55.655548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.171 [ 00:11:26.171 { 00:11:26.171 "name": "BaseBdev4", 00:11:26.171 "aliases": [ 00:11:26.171 "e516f75d-59b7-4778-9589-a47f05ed46a5" 00:11:26.171 ], 00:11:26.171 "product_name": "Malloc disk", 00:11:26.171 "block_size": 512, 00:11:26.171 
"num_blocks": 65536, 00:11:26.171 "uuid": "e516f75d-59b7-4778-9589-a47f05ed46a5", 00:11:26.171 "assigned_rate_limits": { 00:11:26.171 "rw_ios_per_sec": 0, 00:11:26.171 "rw_mbytes_per_sec": 0, 00:11:26.171 "r_mbytes_per_sec": 0, 00:11:26.171 "w_mbytes_per_sec": 0 00:11:26.171 }, 00:11:26.171 "claimed": true, 00:11:26.171 "claim_type": "exclusive_write", 00:11:26.171 "zoned": false, 00:11:26.171 "supported_io_types": { 00:11:26.171 "read": true, 00:11:26.171 "write": true, 00:11:26.171 "unmap": true, 00:11:26.171 "flush": true, 00:11:26.171 "reset": true, 00:11:26.171 "nvme_admin": false, 00:11:26.171 "nvme_io": false, 00:11:26.171 "nvme_io_md": false, 00:11:26.171 "write_zeroes": true, 00:11:26.171 "zcopy": true, 00:11:26.171 "get_zone_info": false, 00:11:26.171 "zone_management": false, 00:11:26.171 "zone_append": false, 00:11:26.171 "compare": false, 00:11:26.171 "compare_and_write": false, 00:11:26.171 "abort": true, 00:11:26.171 "seek_hole": false, 00:11:26.171 "seek_data": false, 00:11:26.171 "copy": true, 00:11:26.171 "nvme_iov_md": false 00:11:26.171 }, 00:11:26.171 "memory_domains": [ 00:11:26.171 { 00:11:26.171 "dma_device_id": "system", 00:11:26.171 "dma_device_type": 1 00:11:26.171 }, 00:11:26.171 { 00:11:26.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.171 "dma_device_type": 2 00:11:26.171 } 00:11:26.171 ], 00:11:26.171 "driver_specific": {} 00:11:26.171 } 00:11:26.171 ] 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.171 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.430 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.430 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.430 "name": "Existed_Raid", 00:11:26.430 "uuid": "c0e7ee2b-1072-4946-a8ba-a407264baca7", 00:11:26.430 "strip_size_kb": 64, 00:11:26.430 "state": "online", 00:11:26.430 "raid_level": "raid0", 00:11:26.430 "superblock": true, 00:11:26.430 "num_base_bdevs": 4, 
00:11:26.430 "num_base_bdevs_discovered": 4, 00:11:26.430 "num_base_bdevs_operational": 4, 00:11:26.430 "base_bdevs_list": [ 00:11:26.430 { 00:11:26.430 "name": "BaseBdev1", 00:11:26.430 "uuid": "5527dbd2-3f9b-44c8-899c-4322cc7304eb", 00:11:26.430 "is_configured": true, 00:11:26.430 "data_offset": 2048, 00:11:26.430 "data_size": 63488 00:11:26.430 }, 00:11:26.430 { 00:11:26.430 "name": "BaseBdev2", 00:11:26.430 "uuid": "44822eb6-a35a-4487-8b53-67e2ab288fde", 00:11:26.431 "is_configured": true, 00:11:26.431 "data_offset": 2048, 00:11:26.431 "data_size": 63488 00:11:26.431 }, 00:11:26.431 { 00:11:26.431 "name": "BaseBdev3", 00:11:26.431 "uuid": "da924d42-3936-4ea7-819e-2fbd82995fa2", 00:11:26.431 "is_configured": true, 00:11:26.431 "data_offset": 2048, 00:11:26.431 "data_size": 63488 00:11:26.431 }, 00:11:26.431 { 00:11:26.431 "name": "BaseBdev4", 00:11:26.431 "uuid": "e516f75d-59b7-4778-9589-a47f05ed46a5", 00:11:26.431 "is_configured": true, 00:11:26.431 "data_offset": 2048, 00:11:26.431 "data_size": 63488 00:11:26.431 } 00:11:26.431 ] 00:11:26.431 }' 00:11:26.431 16:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.431 16:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.690 
16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.690 [2024-11-08 16:52:56.162272] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.690 "name": "Existed_Raid", 00:11:26.690 "aliases": [ 00:11:26.690 "c0e7ee2b-1072-4946-a8ba-a407264baca7" 00:11:26.690 ], 00:11:26.690 "product_name": "Raid Volume", 00:11:26.690 "block_size": 512, 00:11:26.690 "num_blocks": 253952, 00:11:26.690 "uuid": "c0e7ee2b-1072-4946-a8ba-a407264baca7", 00:11:26.690 "assigned_rate_limits": { 00:11:26.690 "rw_ios_per_sec": 0, 00:11:26.690 "rw_mbytes_per_sec": 0, 00:11:26.690 "r_mbytes_per_sec": 0, 00:11:26.690 "w_mbytes_per_sec": 0 00:11:26.690 }, 00:11:26.690 "claimed": false, 00:11:26.690 "zoned": false, 00:11:26.690 "supported_io_types": { 00:11:26.690 "read": true, 00:11:26.690 "write": true, 00:11:26.690 "unmap": true, 00:11:26.690 "flush": true, 00:11:26.690 "reset": true, 00:11:26.690 "nvme_admin": false, 00:11:26.690 "nvme_io": false, 00:11:26.690 "nvme_io_md": false, 00:11:26.690 "write_zeroes": true, 00:11:26.690 "zcopy": false, 00:11:26.690 "get_zone_info": false, 00:11:26.690 "zone_management": false, 00:11:26.690 "zone_append": false, 00:11:26.690 "compare": false, 00:11:26.690 "compare_and_write": false, 00:11:26.690 "abort": false, 00:11:26.690 "seek_hole": false, 00:11:26.690 "seek_data": false, 00:11:26.690 "copy": false, 00:11:26.690 
"nvme_iov_md": false 00:11:26.690 }, 00:11:26.690 "memory_domains": [ 00:11:26.690 { 00:11:26.690 "dma_device_id": "system", 00:11:26.690 "dma_device_type": 1 00:11:26.690 }, 00:11:26.690 { 00:11:26.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.690 "dma_device_type": 2 00:11:26.690 }, 00:11:26.690 { 00:11:26.690 "dma_device_id": "system", 00:11:26.690 "dma_device_type": 1 00:11:26.690 }, 00:11:26.690 { 00:11:26.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.690 "dma_device_type": 2 00:11:26.690 }, 00:11:26.690 { 00:11:26.690 "dma_device_id": "system", 00:11:26.690 "dma_device_type": 1 00:11:26.690 }, 00:11:26.690 { 00:11:26.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.690 "dma_device_type": 2 00:11:26.690 }, 00:11:26.690 { 00:11:26.690 "dma_device_id": "system", 00:11:26.690 "dma_device_type": 1 00:11:26.690 }, 00:11:26.690 { 00:11:26.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.690 "dma_device_type": 2 00:11:26.690 } 00:11:26.690 ], 00:11:26.690 "driver_specific": { 00:11:26.690 "raid": { 00:11:26.690 "uuid": "c0e7ee2b-1072-4946-a8ba-a407264baca7", 00:11:26.690 "strip_size_kb": 64, 00:11:26.690 "state": "online", 00:11:26.690 "raid_level": "raid0", 00:11:26.690 "superblock": true, 00:11:26.690 "num_base_bdevs": 4, 00:11:26.690 "num_base_bdevs_discovered": 4, 00:11:26.690 "num_base_bdevs_operational": 4, 00:11:26.690 "base_bdevs_list": [ 00:11:26.690 { 00:11:26.690 "name": "BaseBdev1", 00:11:26.690 "uuid": "5527dbd2-3f9b-44c8-899c-4322cc7304eb", 00:11:26.690 "is_configured": true, 00:11:26.690 "data_offset": 2048, 00:11:26.690 "data_size": 63488 00:11:26.690 }, 00:11:26.690 { 00:11:26.690 "name": "BaseBdev2", 00:11:26.690 "uuid": "44822eb6-a35a-4487-8b53-67e2ab288fde", 00:11:26.690 "is_configured": true, 00:11:26.690 "data_offset": 2048, 00:11:26.690 "data_size": 63488 00:11:26.690 }, 00:11:26.690 { 00:11:26.690 "name": "BaseBdev3", 00:11:26.690 "uuid": "da924d42-3936-4ea7-819e-2fbd82995fa2", 00:11:26.690 "is_configured": true, 
00:11:26.690 "data_offset": 2048, 00:11:26.690 "data_size": 63488 00:11:26.690 }, 00:11:26.690 { 00:11:26.690 "name": "BaseBdev4", 00:11:26.690 "uuid": "e516f75d-59b7-4778-9589-a47f05ed46a5", 00:11:26.690 "is_configured": true, 00:11:26.690 "data_offset": 2048, 00:11:26.690 "data_size": 63488 00:11:26.690 } 00:11:26.690 ] 00:11:26.690 } 00:11:26.690 } 00:11:26.690 }' 00:11:26.690 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.949 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:26.949 BaseBdev2 00:11:26.949 BaseBdev3 00:11:26.949 BaseBdev4' 00:11:26.949 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.949 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.949 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.949 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:26.949 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.949 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.949 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.949 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.949 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.949 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.950 16:52:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.950 [2024-11-08 16:52:56.437483] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.950 [2024-11-08 16:52:56.437517] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.950 [2024-11-08 16:52:56.437577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.950 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:27.208 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.208 "name": "Existed_Raid", 00:11:27.208 "uuid": "c0e7ee2b-1072-4946-a8ba-a407264baca7", 00:11:27.208 "strip_size_kb": 64, 00:11:27.208 "state": "offline", 00:11:27.208 "raid_level": "raid0", 00:11:27.208 "superblock": true, 00:11:27.208 "num_base_bdevs": 4, 00:11:27.208 "num_base_bdevs_discovered": 3, 00:11:27.208 "num_base_bdevs_operational": 3, 00:11:27.208 "base_bdevs_list": [ 00:11:27.208 { 00:11:27.208 "name": null, 00:11:27.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.208 "is_configured": false, 00:11:27.208 "data_offset": 0, 00:11:27.208 "data_size": 63488 00:11:27.208 }, 00:11:27.208 { 00:11:27.208 "name": "BaseBdev2", 00:11:27.208 "uuid": "44822eb6-a35a-4487-8b53-67e2ab288fde", 00:11:27.208 "is_configured": true, 00:11:27.208 "data_offset": 2048, 00:11:27.208 "data_size": 63488 00:11:27.208 }, 00:11:27.208 { 00:11:27.208 "name": "BaseBdev3", 00:11:27.208 "uuid": "da924d42-3936-4ea7-819e-2fbd82995fa2", 00:11:27.208 "is_configured": true, 00:11:27.208 "data_offset": 2048, 00:11:27.208 "data_size": 63488 00:11:27.208 }, 00:11:27.208 { 00:11:27.208 "name": "BaseBdev4", 00:11:27.208 "uuid": "e516f75d-59b7-4778-9589-a47f05ed46a5", 00:11:27.208 "is_configured": true, 00:11:27.208 "data_offset": 2048, 00:11:27.208 "data_size": 63488 00:11:27.208 } 00:11:27.208 ] 00:11:27.208 }' 00:11:27.208 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.208 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.467 
16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.467 [2024-11-08 16:52:56.971789] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.467 16:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 [2024-11-08 16:52:57.022945] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:27.727 16:52:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 [2024-11-08 16:52:57.086274] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:27.727 [2024-11-08 16:52:57.086330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 BaseBdev2 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 [ 00:11:27.727 { 00:11:27.727 "name": "BaseBdev2", 00:11:27.727 "aliases": [ 00:11:27.727 
"48bfab7e-ad09-4113-912f-0a2bd1a3a6d9" 00:11:27.727 ], 00:11:27.727 "product_name": "Malloc disk", 00:11:27.727 "block_size": 512, 00:11:27.727 "num_blocks": 65536, 00:11:27.727 "uuid": "48bfab7e-ad09-4113-912f-0a2bd1a3a6d9", 00:11:27.727 "assigned_rate_limits": { 00:11:27.727 "rw_ios_per_sec": 0, 00:11:27.727 "rw_mbytes_per_sec": 0, 00:11:27.727 "r_mbytes_per_sec": 0, 00:11:27.727 "w_mbytes_per_sec": 0 00:11:27.727 }, 00:11:27.727 "claimed": false, 00:11:27.727 "zoned": false, 00:11:27.727 "supported_io_types": { 00:11:27.727 "read": true, 00:11:27.727 "write": true, 00:11:27.727 "unmap": true, 00:11:27.727 "flush": true, 00:11:27.727 "reset": true, 00:11:27.727 "nvme_admin": false, 00:11:27.727 "nvme_io": false, 00:11:27.727 "nvme_io_md": false, 00:11:27.727 "write_zeroes": true, 00:11:27.727 "zcopy": true, 00:11:27.727 "get_zone_info": false, 00:11:27.727 "zone_management": false, 00:11:27.727 "zone_append": false, 00:11:27.727 "compare": false, 00:11:27.727 "compare_and_write": false, 00:11:27.727 "abort": true, 00:11:27.727 "seek_hole": false, 00:11:27.727 "seek_data": false, 00:11:27.727 "copy": true, 00:11:27.727 "nvme_iov_md": false 00:11:27.727 }, 00:11:27.727 "memory_domains": [ 00:11:27.727 { 00:11:27.727 "dma_device_id": "system", 00:11:27.727 "dma_device_type": 1 00:11:27.727 }, 00:11:27.727 { 00:11:27.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.727 "dma_device_type": 2 00:11:27.727 } 00:11:27.727 ], 00:11:27.727 "driver_specific": {} 00:11:27.727 } 00:11:27.727 ] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.727 16:52:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 BaseBdev3 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.727 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.728 [ 00:11:27.728 { 
00:11:27.728 "name": "BaseBdev3", 00:11:27.728 "aliases": [ 00:11:27.728 "b21306fa-647a-41fe-9d23-fee41db6d7cf" 00:11:27.728 ], 00:11:27.728 "product_name": "Malloc disk", 00:11:27.728 "block_size": 512, 00:11:27.728 "num_blocks": 65536, 00:11:27.728 "uuid": "b21306fa-647a-41fe-9d23-fee41db6d7cf", 00:11:27.728 "assigned_rate_limits": { 00:11:27.728 "rw_ios_per_sec": 0, 00:11:27.728 "rw_mbytes_per_sec": 0, 00:11:27.728 "r_mbytes_per_sec": 0, 00:11:27.728 "w_mbytes_per_sec": 0 00:11:27.728 }, 00:11:27.728 "claimed": false, 00:11:27.728 "zoned": false, 00:11:27.728 "supported_io_types": { 00:11:27.728 "read": true, 00:11:27.728 "write": true, 00:11:27.728 "unmap": true, 00:11:27.728 "flush": true, 00:11:27.728 "reset": true, 00:11:27.728 "nvme_admin": false, 00:11:27.728 "nvme_io": false, 00:11:27.728 "nvme_io_md": false, 00:11:27.728 "write_zeroes": true, 00:11:27.728 "zcopy": true, 00:11:27.728 "get_zone_info": false, 00:11:27.728 "zone_management": false, 00:11:27.728 "zone_append": false, 00:11:27.728 "compare": false, 00:11:27.728 "compare_and_write": false, 00:11:27.728 "abort": true, 00:11:27.728 "seek_hole": false, 00:11:27.728 "seek_data": false, 00:11:27.728 "copy": true, 00:11:27.728 "nvme_iov_md": false 00:11:27.728 }, 00:11:27.728 "memory_domains": [ 00:11:27.728 { 00:11:27.728 "dma_device_id": "system", 00:11:27.728 "dma_device_type": 1 00:11:27.728 }, 00:11:27.728 { 00:11:27.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.728 "dma_device_type": 2 00:11:27.728 } 00:11:27.728 ], 00:11:27.728 "driver_specific": {} 00:11:27.728 } 00:11:27.728 ] 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.728 BaseBdev4 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:27.728 [ 00:11:27.728 { 00:11:27.728 "name": "BaseBdev4", 00:11:27.728 "aliases": [ 00:11:27.728 "a3e29dfc-ef2c-4392-b52e-33e1335a9fe7" 00:11:27.728 ], 00:11:27.728 "product_name": "Malloc disk", 00:11:27.728 "block_size": 512, 00:11:27.728 "num_blocks": 65536, 00:11:27.728 "uuid": "a3e29dfc-ef2c-4392-b52e-33e1335a9fe7", 00:11:27.728 "assigned_rate_limits": { 00:11:27.728 "rw_ios_per_sec": 0, 00:11:27.728 "rw_mbytes_per_sec": 0, 00:11:27.728 "r_mbytes_per_sec": 0, 00:11:27.728 "w_mbytes_per_sec": 0 00:11:27.728 }, 00:11:27.728 "claimed": false, 00:11:27.728 "zoned": false, 00:11:27.728 "supported_io_types": { 00:11:27.728 "read": true, 00:11:27.728 "write": true, 00:11:27.728 "unmap": true, 00:11:27.728 "flush": true, 00:11:27.728 "reset": true, 00:11:27.728 "nvme_admin": false, 00:11:27.728 "nvme_io": false, 00:11:27.728 "nvme_io_md": false, 00:11:27.728 "write_zeroes": true, 00:11:27.728 "zcopy": true, 00:11:27.728 "get_zone_info": false, 00:11:27.728 "zone_management": false, 00:11:27.728 "zone_append": false, 00:11:27.728 "compare": false, 00:11:27.728 "compare_and_write": false, 00:11:27.728 "abort": true, 00:11:27.728 "seek_hole": false, 00:11:27.728 "seek_data": false, 00:11:27.728 "copy": true, 00:11:27.728 "nvme_iov_md": false 00:11:27.728 }, 00:11:27.728 "memory_domains": [ 00:11:27.728 { 00:11:27.728 "dma_device_id": "system", 00:11:27.728 "dma_device_type": 1 00:11:27.728 }, 00:11:27.728 { 00:11:27.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.728 "dma_device_type": 2 00:11:27.728 } 00:11:27.728 ], 00:11:27.728 "driver_specific": {} 00:11:27.728 } 00:11:27.728 ] 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:27.728 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.987 16:52:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.987 [2024-11-08 16:52:57.259819] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.987 [2024-11-08 16:52:57.259859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.987 [2024-11-08 16:52:57.259881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.987 [2024-11-08 16:52:57.261726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.987 [2024-11-08 16:52:57.261780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.987 "name": "Existed_Raid", 00:11:27.987 "uuid": "18998728-5ca7-40f5-bb50-6578d3cabe8e", 00:11:27.987 "strip_size_kb": 64, 00:11:27.987 "state": "configuring", 00:11:27.987 "raid_level": "raid0", 00:11:27.987 "superblock": true, 00:11:27.987 "num_base_bdevs": 4, 00:11:27.987 "num_base_bdevs_discovered": 3, 00:11:27.987 "num_base_bdevs_operational": 4, 00:11:27.987 "base_bdevs_list": [ 00:11:27.987 { 00:11:27.987 "name": "BaseBdev1", 00:11:27.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.987 "is_configured": false, 00:11:27.987 "data_offset": 0, 00:11:27.987 "data_size": 0 00:11:27.987 }, 00:11:27.987 { 00:11:27.987 "name": "BaseBdev2", 00:11:27.987 "uuid": "48bfab7e-ad09-4113-912f-0a2bd1a3a6d9", 00:11:27.987 "is_configured": true, 00:11:27.987 "data_offset": 2048, 00:11:27.987 "data_size": 63488 
00:11:27.987 }, 00:11:27.987 { 00:11:27.987 "name": "BaseBdev3", 00:11:27.987 "uuid": "b21306fa-647a-41fe-9d23-fee41db6d7cf", 00:11:27.987 "is_configured": true, 00:11:27.987 "data_offset": 2048, 00:11:27.987 "data_size": 63488 00:11:27.987 }, 00:11:27.987 { 00:11:27.987 "name": "BaseBdev4", 00:11:27.987 "uuid": "a3e29dfc-ef2c-4392-b52e-33e1335a9fe7", 00:11:27.987 "is_configured": true, 00:11:27.987 "data_offset": 2048, 00:11:27.987 "data_size": 63488 00:11:27.987 } 00:11:27.987 ] 00:11:27.987 }' 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.987 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.246 [2024-11-08 16:52:57.679152] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.246 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.246 "name": "Existed_Raid", 00:11:28.246 "uuid": "18998728-5ca7-40f5-bb50-6578d3cabe8e", 00:11:28.246 "strip_size_kb": 64, 00:11:28.246 "state": "configuring", 00:11:28.246 "raid_level": "raid0", 00:11:28.247 "superblock": true, 00:11:28.247 "num_base_bdevs": 4, 00:11:28.247 "num_base_bdevs_discovered": 2, 00:11:28.247 "num_base_bdevs_operational": 4, 00:11:28.247 "base_bdevs_list": [ 00:11:28.247 { 00:11:28.247 "name": "BaseBdev1", 00:11:28.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.247 "is_configured": false, 00:11:28.247 "data_offset": 0, 00:11:28.247 "data_size": 0 00:11:28.247 }, 00:11:28.247 { 00:11:28.247 "name": null, 00:11:28.247 "uuid": "48bfab7e-ad09-4113-912f-0a2bd1a3a6d9", 00:11:28.247 "is_configured": false, 00:11:28.247 "data_offset": 0, 00:11:28.247 "data_size": 63488 
00:11:28.247 }, 00:11:28.247 { 00:11:28.247 "name": "BaseBdev3", 00:11:28.247 "uuid": "b21306fa-647a-41fe-9d23-fee41db6d7cf", 00:11:28.247 "is_configured": true, 00:11:28.247 "data_offset": 2048, 00:11:28.247 "data_size": 63488 00:11:28.247 }, 00:11:28.247 { 00:11:28.247 "name": "BaseBdev4", 00:11:28.247 "uuid": "a3e29dfc-ef2c-4392-b52e-33e1335a9fe7", 00:11:28.247 "is_configured": true, 00:11:28.247 "data_offset": 2048, 00:11:28.247 "data_size": 63488 00:11:28.247 } 00:11:28.247 ] 00:11:28.247 }' 00:11:28.247 16:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.247 16:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.815 [2024-11-08 16:52:58.165326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.815 BaseBdev1 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.815 [ 00:11:28.815 { 00:11:28.815 "name": "BaseBdev1", 00:11:28.815 "aliases": [ 00:11:28.815 "be4856d5-0c7f-4c6b-9a52-48cce5fd7533" 00:11:28.815 ], 00:11:28.815 "product_name": "Malloc disk", 00:11:28.815 "block_size": 512, 00:11:28.815 "num_blocks": 65536, 00:11:28.815 "uuid": "be4856d5-0c7f-4c6b-9a52-48cce5fd7533", 00:11:28.815 "assigned_rate_limits": { 00:11:28.815 "rw_ios_per_sec": 0, 00:11:28.815 "rw_mbytes_per_sec": 0, 
00:11:28.815 "r_mbytes_per_sec": 0, 00:11:28.815 "w_mbytes_per_sec": 0 00:11:28.815 }, 00:11:28.815 "claimed": true, 00:11:28.815 "claim_type": "exclusive_write", 00:11:28.815 "zoned": false, 00:11:28.815 "supported_io_types": { 00:11:28.815 "read": true, 00:11:28.815 "write": true, 00:11:28.815 "unmap": true, 00:11:28.815 "flush": true, 00:11:28.815 "reset": true, 00:11:28.815 "nvme_admin": false, 00:11:28.815 "nvme_io": false, 00:11:28.815 "nvme_io_md": false, 00:11:28.815 "write_zeroes": true, 00:11:28.815 "zcopy": true, 00:11:28.815 "get_zone_info": false, 00:11:28.815 "zone_management": false, 00:11:28.815 "zone_append": false, 00:11:28.815 "compare": false, 00:11:28.815 "compare_and_write": false, 00:11:28.815 "abort": true, 00:11:28.815 "seek_hole": false, 00:11:28.815 "seek_data": false, 00:11:28.815 "copy": true, 00:11:28.815 "nvme_iov_md": false 00:11:28.815 }, 00:11:28.815 "memory_domains": [ 00:11:28.815 { 00:11:28.815 "dma_device_id": "system", 00:11:28.815 "dma_device_type": 1 00:11:28.815 }, 00:11:28.815 { 00:11:28.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.815 "dma_device_type": 2 00:11:28.815 } 00:11:28.815 ], 00:11:28.815 "driver_specific": {} 00:11:28.815 } 00:11:28.815 ] 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.815 16:52:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.815 "name": "Existed_Raid", 00:11:28.815 "uuid": "18998728-5ca7-40f5-bb50-6578d3cabe8e", 00:11:28.815 "strip_size_kb": 64, 00:11:28.815 "state": "configuring", 00:11:28.815 "raid_level": "raid0", 00:11:28.815 "superblock": true, 00:11:28.815 "num_base_bdevs": 4, 00:11:28.815 "num_base_bdevs_discovered": 3, 00:11:28.815 "num_base_bdevs_operational": 4, 00:11:28.815 "base_bdevs_list": [ 00:11:28.815 { 00:11:28.815 "name": "BaseBdev1", 00:11:28.815 "uuid": "be4856d5-0c7f-4c6b-9a52-48cce5fd7533", 00:11:28.815 "is_configured": true, 00:11:28.815 "data_offset": 2048, 00:11:28.815 "data_size": 63488 00:11:28.815 }, 00:11:28.815 { 
00:11:28.815 "name": null, 00:11:28.815 "uuid": "48bfab7e-ad09-4113-912f-0a2bd1a3a6d9", 00:11:28.815 "is_configured": false, 00:11:28.815 "data_offset": 0, 00:11:28.815 "data_size": 63488 00:11:28.815 }, 00:11:28.815 { 00:11:28.815 "name": "BaseBdev3", 00:11:28.815 "uuid": "b21306fa-647a-41fe-9d23-fee41db6d7cf", 00:11:28.815 "is_configured": true, 00:11:28.815 "data_offset": 2048, 00:11:28.815 "data_size": 63488 00:11:28.815 }, 00:11:28.815 { 00:11:28.815 "name": "BaseBdev4", 00:11:28.815 "uuid": "a3e29dfc-ef2c-4392-b52e-33e1335a9fe7", 00:11:28.815 "is_configured": true, 00:11:28.815 "data_offset": 2048, 00:11:28.815 "data_size": 63488 00:11:28.815 } 00:11:28.815 ] 00:11:28.815 }' 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.815 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.384 [2024-11-08 16:52:58.724451] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.384 16:52:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.384 "name": "Existed_Raid", 00:11:29.384 "uuid": "18998728-5ca7-40f5-bb50-6578d3cabe8e", 00:11:29.384 "strip_size_kb": 64, 00:11:29.384 "state": "configuring", 00:11:29.384 "raid_level": "raid0", 00:11:29.384 "superblock": true, 00:11:29.384 "num_base_bdevs": 4, 00:11:29.384 "num_base_bdevs_discovered": 2, 00:11:29.384 "num_base_bdevs_operational": 4, 00:11:29.384 "base_bdevs_list": [ 00:11:29.384 { 00:11:29.384 "name": "BaseBdev1", 00:11:29.384 "uuid": "be4856d5-0c7f-4c6b-9a52-48cce5fd7533", 00:11:29.384 "is_configured": true, 00:11:29.384 "data_offset": 2048, 00:11:29.384 "data_size": 63488 00:11:29.384 }, 00:11:29.384 { 00:11:29.384 "name": null, 00:11:29.384 "uuid": "48bfab7e-ad09-4113-912f-0a2bd1a3a6d9", 00:11:29.384 "is_configured": false, 00:11:29.384 "data_offset": 0, 00:11:29.384 "data_size": 63488 00:11:29.384 }, 00:11:29.384 { 00:11:29.384 "name": null, 00:11:29.384 "uuid": "b21306fa-647a-41fe-9d23-fee41db6d7cf", 00:11:29.384 "is_configured": false, 00:11:29.384 "data_offset": 0, 00:11:29.384 "data_size": 63488 00:11:29.384 }, 00:11:29.384 { 00:11:29.384 "name": "BaseBdev4", 00:11:29.384 "uuid": "a3e29dfc-ef2c-4392-b52e-33e1335a9fe7", 00:11:29.384 "is_configured": true, 00:11:29.384 "data_offset": 2048, 00:11:29.384 "data_size": 63488 00:11:29.384 } 00:11:29.384 ] 00:11:29.384 }' 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.384 16:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.952 
16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.952 [2024-11-08 16:52:59.227666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.952 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.953 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.953 "name": "Existed_Raid", 00:11:29.953 "uuid": "18998728-5ca7-40f5-bb50-6578d3cabe8e", 00:11:29.953 "strip_size_kb": 64, 00:11:29.953 "state": "configuring", 00:11:29.953 "raid_level": "raid0", 00:11:29.953 "superblock": true, 00:11:29.953 "num_base_bdevs": 4, 00:11:29.953 "num_base_bdevs_discovered": 3, 00:11:29.953 "num_base_bdevs_operational": 4, 00:11:29.953 "base_bdevs_list": [ 00:11:29.953 { 00:11:29.953 "name": "BaseBdev1", 00:11:29.953 "uuid": "be4856d5-0c7f-4c6b-9a52-48cce5fd7533", 00:11:29.953 "is_configured": true, 00:11:29.953 "data_offset": 2048, 00:11:29.953 "data_size": 63488 00:11:29.953 }, 00:11:29.953 { 00:11:29.953 "name": null, 00:11:29.953 "uuid": "48bfab7e-ad09-4113-912f-0a2bd1a3a6d9", 00:11:29.953 "is_configured": false, 00:11:29.953 "data_offset": 0, 00:11:29.953 "data_size": 63488 00:11:29.953 }, 00:11:29.953 { 00:11:29.953 "name": "BaseBdev3", 00:11:29.953 "uuid": "b21306fa-647a-41fe-9d23-fee41db6d7cf", 00:11:29.953 "is_configured": true, 00:11:29.953 "data_offset": 2048, 00:11:29.953 "data_size": 63488 00:11:29.953 }, 00:11:29.953 { 00:11:29.953 "name": "BaseBdev4", 00:11:29.953 "uuid": 
"a3e29dfc-ef2c-4392-b52e-33e1335a9fe7", 00:11:29.953 "is_configured": true, 00:11:29.953 "data_offset": 2048, 00:11:29.953 "data_size": 63488 00:11:29.953 } 00:11:29.953 ] 00:11:29.953 }' 00:11:29.953 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.953 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.213 [2024-11-08 16:52:59.706826] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.213 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.471 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.471 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.471 "name": "Existed_Raid", 00:11:30.471 "uuid": "18998728-5ca7-40f5-bb50-6578d3cabe8e", 00:11:30.471 "strip_size_kb": 64, 00:11:30.471 "state": "configuring", 00:11:30.471 "raid_level": "raid0", 00:11:30.471 "superblock": true, 00:11:30.471 "num_base_bdevs": 4, 00:11:30.471 "num_base_bdevs_discovered": 2, 00:11:30.471 "num_base_bdevs_operational": 4, 00:11:30.471 "base_bdevs_list": [ 00:11:30.471 { 00:11:30.471 "name": null, 00:11:30.471 
"uuid": "be4856d5-0c7f-4c6b-9a52-48cce5fd7533", 00:11:30.471 "is_configured": false, 00:11:30.471 "data_offset": 0, 00:11:30.471 "data_size": 63488 00:11:30.471 }, 00:11:30.471 { 00:11:30.471 "name": null, 00:11:30.471 "uuid": "48bfab7e-ad09-4113-912f-0a2bd1a3a6d9", 00:11:30.471 "is_configured": false, 00:11:30.471 "data_offset": 0, 00:11:30.471 "data_size": 63488 00:11:30.471 }, 00:11:30.471 { 00:11:30.471 "name": "BaseBdev3", 00:11:30.471 "uuid": "b21306fa-647a-41fe-9d23-fee41db6d7cf", 00:11:30.471 "is_configured": true, 00:11:30.471 "data_offset": 2048, 00:11:30.471 "data_size": 63488 00:11:30.471 }, 00:11:30.471 { 00:11:30.471 "name": "BaseBdev4", 00:11:30.471 "uuid": "a3e29dfc-ef2c-4392-b52e-33e1335a9fe7", 00:11:30.471 "is_configured": true, 00:11:30.471 "data_offset": 2048, 00:11:30.471 "data_size": 63488 00:11:30.471 } 00:11:30.471 ] 00:11:30.471 }' 00:11:30.471 16:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.471 16:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.729 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.729 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:30.729 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.729 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.729 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.729 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:30.729 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:30.729 16:53:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.729 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.729 [2024-11-08 16:53:00.252468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.988 16:53:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.988 "name": "Existed_Raid", 00:11:30.988 "uuid": "18998728-5ca7-40f5-bb50-6578d3cabe8e", 00:11:30.988 "strip_size_kb": 64, 00:11:30.988 "state": "configuring", 00:11:30.988 "raid_level": "raid0", 00:11:30.988 "superblock": true, 00:11:30.988 "num_base_bdevs": 4, 00:11:30.988 "num_base_bdevs_discovered": 3, 00:11:30.988 "num_base_bdevs_operational": 4, 00:11:30.988 "base_bdevs_list": [ 00:11:30.988 { 00:11:30.988 "name": null, 00:11:30.988 "uuid": "be4856d5-0c7f-4c6b-9a52-48cce5fd7533", 00:11:30.988 "is_configured": false, 00:11:30.988 "data_offset": 0, 00:11:30.988 "data_size": 63488 00:11:30.988 }, 00:11:30.988 { 00:11:30.988 "name": "BaseBdev2", 00:11:30.988 "uuid": "48bfab7e-ad09-4113-912f-0a2bd1a3a6d9", 00:11:30.988 "is_configured": true, 00:11:30.988 "data_offset": 2048, 00:11:30.988 "data_size": 63488 00:11:30.988 }, 00:11:30.988 { 00:11:30.988 "name": "BaseBdev3", 00:11:30.988 "uuid": "b21306fa-647a-41fe-9d23-fee41db6d7cf", 00:11:30.988 "is_configured": true, 00:11:30.988 "data_offset": 2048, 00:11:30.988 "data_size": 63488 00:11:30.988 }, 00:11:30.988 { 00:11:30.988 "name": "BaseBdev4", 00:11:30.988 "uuid": "a3e29dfc-ef2c-4392-b52e-33e1335a9fe7", 00:11:30.988 "is_configured": true, 00:11:30.988 "data_offset": 2048, 00:11:30.988 "data_size": 63488 00:11:30.988 } 00:11:30.988 ] 00:11:30.988 }' 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.988 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.246 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.246 16:53:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:31.246 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.246 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.246 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.246 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:31.246 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.246 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.246 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:31.246 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be4856d5-0c7f-4c6b-9a52-48cce5fd7533 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 [2024-11-08 16:53:00.814406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:31.505 [2024-11-08 16:53:00.814592] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:31.505 [2024-11-08 16:53:00.814606] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:31.505 [2024-11-08 16:53:00.814865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:11:31.505 [2024-11-08 16:53:00.815010] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:31.505 [2024-11-08 16:53:00.815024] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:31.505 NewBaseBdev 00:11:31.505 [2024-11-08 16:53:00.815131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:31.505 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.505 16:53:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.505 [ 00:11:31.505 { 00:11:31.505 "name": "NewBaseBdev", 00:11:31.505 "aliases": [ 00:11:31.505 "be4856d5-0c7f-4c6b-9a52-48cce5fd7533" 00:11:31.505 ], 00:11:31.505 "product_name": "Malloc disk", 00:11:31.505 "block_size": 512, 00:11:31.505 "num_blocks": 65536, 00:11:31.505 "uuid": "be4856d5-0c7f-4c6b-9a52-48cce5fd7533", 00:11:31.505 "assigned_rate_limits": { 00:11:31.505 "rw_ios_per_sec": 0, 00:11:31.505 "rw_mbytes_per_sec": 0, 00:11:31.505 "r_mbytes_per_sec": 0, 00:11:31.505 "w_mbytes_per_sec": 0 00:11:31.505 }, 00:11:31.505 "claimed": true, 00:11:31.505 "claim_type": "exclusive_write", 00:11:31.505 "zoned": false, 00:11:31.505 "supported_io_types": { 00:11:31.505 "read": true, 00:11:31.505 "write": true, 00:11:31.505 "unmap": true, 00:11:31.505 "flush": true, 00:11:31.505 "reset": true, 00:11:31.505 "nvme_admin": false, 00:11:31.505 "nvme_io": false, 00:11:31.505 "nvme_io_md": false, 00:11:31.505 "write_zeroes": true, 00:11:31.505 "zcopy": true, 00:11:31.505 "get_zone_info": false, 00:11:31.505 "zone_management": false, 00:11:31.505 "zone_append": false, 00:11:31.505 "compare": false, 00:11:31.505 "compare_and_write": false, 00:11:31.505 "abort": true, 00:11:31.505 "seek_hole": false, 00:11:31.505 "seek_data": false, 00:11:31.505 "copy": true, 00:11:31.505 "nvme_iov_md": false 00:11:31.505 }, 00:11:31.505 "memory_domains": [ 00:11:31.505 { 00:11:31.505 "dma_device_id": "system", 00:11:31.505 "dma_device_type": 1 00:11:31.505 }, 00:11:31.505 { 00:11:31.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.505 "dma_device_type": 2 00:11:31.505 } 00:11:31.505 ], 00:11:31.506 "driver_specific": {} 00:11:31.506 } 00:11:31.506 ] 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:31.506 16:53:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.506 "name": "Existed_Raid", 00:11:31.506 "uuid": "18998728-5ca7-40f5-bb50-6578d3cabe8e", 00:11:31.506 "strip_size_kb": 64, 00:11:31.506 
"state": "online", 00:11:31.506 "raid_level": "raid0", 00:11:31.506 "superblock": true, 00:11:31.506 "num_base_bdevs": 4, 00:11:31.506 "num_base_bdevs_discovered": 4, 00:11:31.506 "num_base_bdevs_operational": 4, 00:11:31.506 "base_bdevs_list": [ 00:11:31.506 { 00:11:31.506 "name": "NewBaseBdev", 00:11:31.506 "uuid": "be4856d5-0c7f-4c6b-9a52-48cce5fd7533", 00:11:31.506 "is_configured": true, 00:11:31.506 "data_offset": 2048, 00:11:31.506 "data_size": 63488 00:11:31.506 }, 00:11:31.506 { 00:11:31.506 "name": "BaseBdev2", 00:11:31.506 "uuid": "48bfab7e-ad09-4113-912f-0a2bd1a3a6d9", 00:11:31.506 "is_configured": true, 00:11:31.506 "data_offset": 2048, 00:11:31.506 "data_size": 63488 00:11:31.506 }, 00:11:31.506 { 00:11:31.506 "name": "BaseBdev3", 00:11:31.506 "uuid": "b21306fa-647a-41fe-9d23-fee41db6d7cf", 00:11:31.506 "is_configured": true, 00:11:31.506 "data_offset": 2048, 00:11:31.506 "data_size": 63488 00:11:31.506 }, 00:11:31.506 { 00:11:31.506 "name": "BaseBdev4", 00:11:31.506 "uuid": "a3e29dfc-ef2c-4392-b52e-33e1335a9fe7", 00:11:31.506 "is_configured": true, 00:11:31.506 "data_offset": 2048, 00:11:31.506 "data_size": 63488 00:11:31.506 } 00:11:31.506 ] 00:11:31.506 }' 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.506 16:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.765 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:31.765 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:31.765 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:31.765 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:31.765 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:31.765 
16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:31.765 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:31.765 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.765 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.765 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:31.765 [2024-11-08 16:53:01.258056] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.765 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.024 "name": "Existed_Raid", 00:11:32.024 "aliases": [ 00:11:32.024 "18998728-5ca7-40f5-bb50-6578d3cabe8e" 00:11:32.024 ], 00:11:32.024 "product_name": "Raid Volume", 00:11:32.024 "block_size": 512, 00:11:32.024 "num_blocks": 253952, 00:11:32.024 "uuid": "18998728-5ca7-40f5-bb50-6578d3cabe8e", 00:11:32.024 "assigned_rate_limits": { 00:11:32.024 "rw_ios_per_sec": 0, 00:11:32.024 "rw_mbytes_per_sec": 0, 00:11:32.024 "r_mbytes_per_sec": 0, 00:11:32.024 "w_mbytes_per_sec": 0 00:11:32.024 }, 00:11:32.024 "claimed": false, 00:11:32.024 "zoned": false, 00:11:32.024 "supported_io_types": { 00:11:32.024 "read": true, 00:11:32.024 "write": true, 00:11:32.024 "unmap": true, 00:11:32.024 "flush": true, 00:11:32.024 "reset": true, 00:11:32.024 "nvme_admin": false, 00:11:32.024 "nvme_io": false, 00:11:32.024 "nvme_io_md": false, 00:11:32.024 "write_zeroes": true, 00:11:32.024 "zcopy": false, 00:11:32.024 "get_zone_info": false, 00:11:32.024 "zone_management": false, 00:11:32.024 "zone_append": false, 00:11:32.024 "compare": false, 00:11:32.024 "compare_and_write": false, 00:11:32.024 "abort": 
false, 00:11:32.024 "seek_hole": false, 00:11:32.024 "seek_data": false, 00:11:32.024 "copy": false, 00:11:32.024 "nvme_iov_md": false 00:11:32.024 }, 00:11:32.024 "memory_domains": [ 00:11:32.024 { 00:11:32.024 "dma_device_id": "system", 00:11:32.024 "dma_device_type": 1 00:11:32.024 }, 00:11:32.024 { 00:11:32.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.024 "dma_device_type": 2 00:11:32.024 }, 00:11:32.024 { 00:11:32.024 "dma_device_id": "system", 00:11:32.024 "dma_device_type": 1 00:11:32.024 }, 00:11:32.024 { 00:11:32.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.024 "dma_device_type": 2 00:11:32.024 }, 00:11:32.024 { 00:11:32.024 "dma_device_id": "system", 00:11:32.024 "dma_device_type": 1 00:11:32.024 }, 00:11:32.024 { 00:11:32.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.024 "dma_device_type": 2 00:11:32.024 }, 00:11:32.024 { 00:11:32.024 "dma_device_id": "system", 00:11:32.024 "dma_device_type": 1 00:11:32.024 }, 00:11:32.024 { 00:11:32.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.024 "dma_device_type": 2 00:11:32.024 } 00:11:32.024 ], 00:11:32.024 "driver_specific": { 00:11:32.024 "raid": { 00:11:32.024 "uuid": "18998728-5ca7-40f5-bb50-6578d3cabe8e", 00:11:32.024 "strip_size_kb": 64, 00:11:32.024 "state": "online", 00:11:32.024 "raid_level": "raid0", 00:11:32.024 "superblock": true, 00:11:32.024 "num_base_bdevs": 4, 00:11:32.024 "num_base_bdevs_discovered": 4, 00:11:32.024 "num_base_bdevs_operational": 4, 00:11:32.024 "base_bdevs_list": [ 00:11:32.024 { 00:11:32.024 "name": "NewBaseBdev", 00:11:32.024 "uuid": "be4856d5-0c7f-4c6b-9a52-48cce5fd7533", 00:11:32.024 "is_configured": true, 00:11:32.024 "data_offset": 2048, 00:11:32.024 "data_size": 63488 00:11:32.024 }, 00:11:32.024 { 00:11:32.024 "name": "BaseBdev2", 00:11:32.024 "uuid": "48bfab7e-ad09-4113-912f-0a2bd1a3a6d9", 00:11:32.024 "is_configured": true, 00:11:32.024 "data_offset": 2048, 00:11:32.024 "data_size": 63488 00:11:32.024 }, 00:11:32.024 { 00:11:32.024 
"name": "BaseBdev3", 00:11:32.024 "uuid": "b21306fa-647a-41fe-9d23-fee41db6d7cf", 00:11:32.024 "is_configured": true, 00:11:32.024 "data_offset": 2048, 00:11:32.024 "data_size": 63488 00:11:32.024 }, 00:11:32.024 { 00:11:32.024 "name": "BaseBdev4", 00:11:32.024 "uuid": "a3e29dfc-ef2c-4392-b52e-33e1335a9fe7", 00:11:32.024 "is_configured": true, 00:11:32.024 "data_offset": 2048, 00:11:32.024 "data_size": 63488 00:11:32.024 } 00:11:32.024 ] 00:11:32.024 } 00:11:32.024 } 00:11:32.024 }' 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:32.024 BaseBdev2 00:11:32.024 BaseBdev3 00:11:32.024 BaseBdev4' 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.024 16:53:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.024 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.284 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.284 [2024-11-08 16:53:01.609102] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.284 [2024-11-08 16:53:01.609141] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.284 [2024-11-08 16:53:01.609239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.285 [2024-11-08 16:53:01.609313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.285 [2024-11-08 16:53:01.609324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81002 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81002 ']' 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81002 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81002 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:32.285 killing process with pid 81002 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81002' 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81002 00:11:32.285 [2024-11-08 16:53:01.652032] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.285 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81002 00:11:32.285 [2024-11-08 16:53:01.694587] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.544 16:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:32.544 00:11:32.544 real 0m9.649s 00:11:32.544 user 0m16.543s 00:11:32.544 sys 0m2.035s 00:11:32.544 16:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.544 16:53:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.544 ************************************ 00:11:32.544 END TEST raid_state_function_test_sb 00:11:32.544 ************************************ 00:11:32.544 16:53:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:32.544 16:53:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:32.544 16:53:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.544 16:53:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.544 ************************************ 00:11:32.544 START TEST raid_superblock_test 00:11:32.544 ************************************ 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81655 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81655 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81655 ']' 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.544 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.804 [2024-11-08 16:53:02.097247] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:32.805 [2024-11-08 16:53:02.097417] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81655 ] 00:11:32.805 [2024-11-08 16:53:02.239659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.805 [2024-11-08 16:53:02.285300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.805 [2024-11-08 16:53:02.327421] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.805 [2024-11-08 16:53:02.327467] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:33.744 
16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.744 malloc1 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.744 [2024-11-08 16:53:02.965569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:33.744 [2024-11-08 16:53:02.965666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.744 [2024-11-08 16:53:02.965689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:33.744 [2024-11-08 16:53:02.965711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.744 [2024-11-08 16:53:02.968022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.744 [2024-11-08 16:53:02.968063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:33.744 pt1 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.744 malloc2 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.744 16:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.744 [2024-11-08 16:53:03.005259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.744 [2024-11-08 16:53:03.005332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.744 [2024-11-08 16:53:03.005352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:33.744 [2024-11-08 16:53:03.005366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.744 [2024-11-08 16:53:03.007957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.744 [2024-11-08 16:53:03.008000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.744 
pt2 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.744 malloc3 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.744 [2024-11-08 16:53:03.038380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:33.744 [2024-11-08 16:53:03.038446] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.744 [2024-11-08 16:53:03.038468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:33.744 [2024-11-08 16:53:03.038481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.744 [2024-11-08 16:53:03.041062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.744 [2024-11-08 16:53:03.041106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:33.744 pt3 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.744 malloc4 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.744 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.744 [2024-11-08 16:53:03.071602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:33.744 [2024-11-08 16:53:03.071683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.744 [2024-11-08 16:53:03.071701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:33.744 [2024-11-08 16:53:03.071715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.745 [2024-11-08 16:53:03.074104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.745 [2024-11-08 16:53:03.074146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:33.745 pt4 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.745 [2024-11-08 16:53:03.083699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:33.745 [2024-11-08 
16:53:03.085851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:33.745 [2024-11-08 16:53:03.085926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:33.745 [2024-11-08 16:53:03.086002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:33.745 [2024-11-08 16:53:03.086188] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:33.745 [2024-11-08 16:53:03.086212] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:33.745 [2024-11-08 16:53:03.086542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:33.745 [2024-11-08 16:53:03.086761] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:33.745 [2024-11-08 16:53:03.086782] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:33.745 [2024-11-08 16:53:03.086957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.745 "name": "raid_bdev1", 00:11:33.745 "uuid": "0f9f13c6-1937-4408-9fb8-4b47658d71a9", 00:11:33.745 "strip_size_kb": 64, 00:11:33.745 "state": "online", 00:11:33.745 "raid_level": "raid0", 00:11:33.745 "superblock": true, 00:11:33.745 "num_base_bdevs": 4, 00:11:33.745 "num_base_bdevs_discovered": 4, 00:11:33.745 "num_base_bdevs_operational": 4, 00:11:33.745 "base_bdevs_list": [ 00:11:33.745 { 00:11:33.745 "name": "pt1", 00:11:33.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.745 "is_configured": true, 00:11:33.745 "data_offset": 2048, 00:11:33.745 "data_size": 63488 00:11:33.745 }, 00:11:33.745 { 00:11:33.745 "name": "pt2", 00:11:33.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.745 "is_configured": true, 00:11:33.745 "data_offset": 2048, 00:11:33.745 "data_size": 63488 00:11:33.745 }, 00:11:33.745 { 00:11:33.745 "name": "pt3", 00:11:33.745 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.745 "is_configured": true, 00:11:33.745 "data_offset": 2048, 00:11:33.745 
"data_size": 63488 00:11:33.745 }, 00:11:33.745 { 00:11:33.745 "name": "pt4", 00:11:33.745 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.745 "is_configured": true, 00:11:33.745 "data_offset": 2048, 00:11:33.745 "data_size": 63488 00:11:33.745 } 00:11:33.745 ] 00:11:33.745 }' 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.745 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.004 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:34.004 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:34.004 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.004 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.004 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.004 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.263 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.263 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.263 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.263 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.263 [2024-11-08 16:53:03.539214] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.263 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.263 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.263 "name": "raid_bdev1", 00:11:34.263 "aliases": [ 00:11:34.263 "0f9f13c6-1937-4408-9fb8-4b47658d71a9" 
00:11:34.263 ], 00:11:34.263 "product_name": "Raid Volume", 00:11:34.263 "block_size": 512, 00:11:34.263 "num_blocks": 253952, 00:11:34.263 "uuid": "0f9f13c6-1937-4408-9fb8-4b47658d71a9", 00:11:34.263 "assigned_rate_limits": { 00:11:34.263 "rw_ios_per_sec": 0, 00:11:34.263 "rw_mbytes_per_sec": 0, 00:11:34.263 "r_mbytes_per_sec": 0, 00:11:34.263 "w_mbytes_per_sec": 0 00:11:34.263 }, 00:11:34.263 "claimed": false, 00:11:34.263 "zoned": false, 00:11:34.263 "supported_io_types": { 00:11:34.263 "read": true, 00:11:34.263 "write": true, 00:11:34.263 "unmap": true, 00:11:34.263 "flush": true, 00:11:34.263 "reset": true, 00:11:34.263 "nvme_admin": false, 00:11:34.263 "nvme_io": false, 00:11:34.263 "nvme_io_md": false, 00:11:34.263 "write_zeroes": true, 00:11:34.263 "zcopy": false, 00:11:34.263 "get_zone_info": false, 00:11:34.263 "zone_management": false, 00:11:34.263 "zone_append": false, 00:11:34.263 "compare": false, 00:11:34.263 "compare_and_write": false, 00:11:34.263 "abort": false, 00:11:34.263 "seek_hole": false, 00:11:34.263 "seek_data": false, 00:11:34.263 "copy": false, 00:11:34.263 "nvme_iov_md": false 00:11:34.263 }, 00:11:34.263 "memory_domains": [ 00:11:34.263 { 00:11:34.263 "dma_device_id": "system", 00:11:34.263 "dma_device_type": 1 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.263 "dma_device_type": 2 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "dma_device_id": "system", 00:11:34.263 "dma_device_type": 1 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.263 "dma_device_type": 2 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "dma_device_id": "system", 00:11:34.263 "dma_device_type": 1 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.263 "dma_device_type": 2 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "dma_device_id": "system", 00:11:34.263 "dma_device_type": 1 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:34.263 "dma_device_type": 2 00:11:34.263 } 00:11:34.263 ], 00:11:34.263 "driver_specific": { 00:11:34.263 "raid": { 00:11:34.263 "uuid": "0f9f13c6-1937-4408-9fb8-4b47658d71a9", 00:11:34.263 "strip_size_kb": 64, 00:11:34.263 "state": "online", 00:11:34.263 "raid_level": "raid0", 00:11:34.263 "superblock": true, 00:11:34.263 "num_base_bdevs": 4, 00:11:34.263 "num_base_bdevs_discovered": 4, 00:11:34.263 "num_base_bdevs_operational": 4, 00:11:34.263 "base_bdevs_list": [ 00:11:34.263 { 00:11:34.263 "name": "pt1", 00:11:34.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.263 "is_configured": true, 00:11:34.263 "data_offset": 2048, 00:11:34.263 "data_size": 63488 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "name": "pt2", 00:11:34.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.263 "is_configured": true, 00:11:34.263 "data_offset": 2048, 00:11:34.263 "data_size": 63488 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "name": "pt3", 00:11:34.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.263 "is_configured": true, 00:11:34.263 "data_offset": 2048, 00:11:34.263 "data_size": 63488 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "name": "pt4", 00:11:34.263 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:34.263 "is_configured": true, 00:11:34.263 "data_offset": 2048, 00:11:34.263 "data_size": 63488 00:11:34.263 } 00:11:34.263 ] 00:11:34.264 } 00:11:34.264 } 00:11:34.264 }' 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:34.264 pt2 00:11:34.264 pt3 00:11:34.264 pt4' 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.264 16:53:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.264 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:34.523 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.524 [2024-11-08 16:53:03.854720] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0f9f13c6-1937-4408-9fb8-4b47658d71a9 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0f9f13c6-1937-4408-9fb8-4b47658d71a9 ']' 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.524 [2024-11-08 16:53:03.894293] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.524 [2024-11-08 16:53:03.894344] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.524 [2024-11-08 16:53:03.894438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.524 [2024-11-08 16:53:03.894516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.524 [2024-11-08 16:53:03.894527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.524 16:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:34.524 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.524 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:34.524 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.524 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:34.524 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.524 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:34.524 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:34.524 16:53:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:34.524 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:34.524 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.524 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.524 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.524 [2024-11-08 16:53:04.046080] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:34.524 [2024-11-08 16:53:04.048019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:34.524 [2024-11-08 16:53:04.048075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:34.524 [2024-11-08 16:53:04.048105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:34.524 [2024-11-08 16:53:04.048156] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:34.524 [2024-11-08 16:53:04.048221] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:34.524 [2024-11-08 16:53:04.048243] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:34.524 [2024-11-08 16:53:04.048271] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:34.524 [2024-11-08 16:53:04.048285] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.524 [2024-11-08 16:53:04.048295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:11:34.784 request: 00:11:34.784 { 00:11:34.784 "name": "raid_bdev1", 00:11:34.784 "raid_level": "raid0", 00:11:34.784 "base_bdevs": [ 00:11:34.784 "malloc1", 00:11:34.784 "malloc2", 00:11:34.784 "malloc3", 00:11:34.784 "malloc4" 00:11:34.784 ], 00:11:34.784 "strip_size_kb": 64, 00:11:34.784 "superblock": false, 00:11:34.784 "method": "bdev_raid_create", 00:11:34.784 "req_id": 1 00:11:34.784 } 00:11:34.784 Got JSON-RPC error response 00:11:34.784 response: 00:11:34.784 { 00:11:34.784 "code": -17, 00:11:34.784 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:34.784 } 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.784 [2024-11-08 16:53:04.097943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:34.784 [2024-11-08 16:53:04.098016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.784 [2024-11-08 16:53:04.098043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:34.784 [2024-11-08 16:53:04.098055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.784 [2024-11-08 16:53:04.100600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.784 [2024-11-08 16:53:04.100652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:34.784 [2024-11-08 16:53:04.100746] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:34.784 [2024-11-08 16:53:04.100809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:34.784 pt1 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.784 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.784 "name": "raid_bdev1", 00:11:34.784 "uuid": "0f9f13c6-1937-4408-9fb8-4b47658d71a9", 00:11:34.784 "strip_size_kb": 64, 00:11:34.784 "state": "configuring", 00:11:34.784 "raid_level": "raid0", 00:11:34.784 "superblock": true, 00:11:34.784 "num_base_bdevs": 4, 00:11:34.784 "num_base_bdevs_discovered": 1, 00:11:34.784 "num_base_bdevs_operational": 4, 00:11:34.784 "base_bdevs_list": [ 00:11:34.784 { 00:11:34.784 "name": "pt1", 00:11:34.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.784 "is_configured": true, 00:11:34.784 "data_offset": 2048, 00:11:34.784 "data_size": 63488 00:11:34.784 }, 00:11:34.784 { 00:11:34.784 "name": null, 00:11:34.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.784 "is_configured": false, 00:11:34.784 "data_offset": 2048, 00:11:34.784 "data_size": 63488 00:11:34.784 }, 00:11:34.785 { 00:11:34.785 "name": null, 00:11:34.785 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.785 "is_configured": false, 00:11:34.785 "data_offset": 2048, 00:11:34.785 "data_size": 63488 00:11:34.785 }, 00:11:34.785 { 00:11:34.785 "name": null, 00:11:34.785 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:34.785 "is_configured": false, 00:11:34.785 "data_offset": 2048, 00:11:34.785 "data_size": 63488 00:11:34.785 } 00:11:34.785 ] 00:11:34.785 }' 00:11:34.785 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.785 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.353 [2024-11-08 16:53:04.605099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.353 [2024-11-08 16:53:04.605173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.353 [2024-11-08 16:53:04.605195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:35.353 [2024-11-08 16:53:04.605204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.353 [2024-11-08 16:53:04.605683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.353 [2024-11-08 16:53:04.605714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.353 [2024-11-08 16:53:04.605802] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:35.353 [2024-11-08 16:53:04.605832] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.353 pt2 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.353 [2024-11-08 16:53:04.617083] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.353 16:53:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.353 "name": "raid_bdev1", 00:11:35.353 "uuid": "0f9f13c6-1937-4408-9fb8-4b47658d71a9", 00:11:35.353 "strip_size_kb": 64, 00:11:35.353 "state": "configuring", 00:11:35.353 "raid_level": "raid0", 00:11:35.353 "superblock": true, 00:11:35.353 "num_base_bdevs": 4, 00:11:35.353 "num_base_bdevs_discovered": 1, 00:11:35.353 "num_base_bdevs_operational": 4, 00:11:35.353 "base_bdevs_list": [ 00:11:35.353 { 00:11:35.353 "name": "pt1", 00:11:35.353 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.353 "is_configured": true, 00:11:35.353 "data_offset": 2048, 00:11:35.353 "data_size": 63488 00:11:35.353 }, 00:11:35.353 { 00:11:35.353 "name": null, 00:11:35.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.353 "is_configured": false, 00:11:35.353 "data_offset": 0, 00:11:35.353 "data_size": 63488 00:11:35.353 }, 00:11:35.353 { 00:11:35.353 "name": null, 00:11:35.353 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.353 "is_configured": false, 00:11:35.353 "data_offset": 2048, 00:11:35.353 "data_size": 63488 00:11:35.353 }, 00:11:35.353 { 00:11:35.353 "name": null, 00:11:35.353 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.353 "is_configured": false, 00:11:35.353 "data_offset": 2048, 00:11:35.353 "data_size": 63488 00:11:35.353 } 00:11:35.353 ] 00:11:35.353 }' 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.353 16:53:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.613 [2024-11-08 16:53:05.084273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.613 [2024-11-08 16:53:05.084370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.613 [2024-11-08 16:53:05.084387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:35.613 [2024-11-08 16:53:05.084398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.613 [2024-11-08 16:53:05.084824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.613 [2024-11-08 16:53:05.084852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.613 [2024-11-08 16:53:05.084925] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:35.613 [2024-11-08 16:53:05.084953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.613 pt2 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.613 [2024-11-08 16:53:05.096203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:35.613 [2024-11-08 16:53:05.096263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.613 [2024-11-08 16:53:05.096281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:35.613 [2024-11-08 16:53:05.096292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.613 [2024-11-08 16:53:05.096689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.613 [2024-11-08 16:53:05.096716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:35.613 [2024-11-08 16:53:05.096777] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:35.613 [2024-11-08 16:53:05.096803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:35.613 pt3 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.613 [2024-11-08 16:53:05.108204] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:35.613 [2024-11-08 16:53:05.108262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.613 [2024-11-08 16:53:05.108277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:35.613 [2024-11-08 16:53:05.108287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.613 [2024-11-08 16:53:05.108618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.613 [2024-11-08 16:53:05.108651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:35.613 [2024-11-08 16:53:05.108711] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:35.613 [2024-11-08 16:53:05.108732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:35.613 [2024-11-08 16:53:05.108831] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:35.613 [2024-11-08 16:53:05.108844] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:35.613 [2024-11-08 16:53:05.109079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:35.613 [2024-11-08 16:53:05.109230] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:35.613 [2024-11-08 16:53:05.109244] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:35.613 [2024-11-08 16:53:05.109351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.613 pt4 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.613 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.614 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.614 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.614 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.614 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.614 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.614 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.614 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.614 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.873 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.873 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.873 "name": "raid_bdev1", 00:11:35.873 "uuid": "0f9f13c6-1937-4408-9fb8-4b47658d71a9", 00:11:35.873 "strip_size_kb": 64, 00:11:35.873 "state": "online", 00:11:35.873 "raid_level": "raid0", 00:11:35.873 
"superblock": true, 00:11:35.873 "num_base_bdevs": 4, 00:11:35.873 "num_base_bdevs_discovered": 4, 00:11:35.873 "num_base_bdevs_operational": 4, 00:11:35.873 "base_bdevs_list": [ 00:11:35.873 { 00:11:35.873 "name": "pt1", 00:11:35.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.873 "is_configured": true, 00:11:35.873 "data_offset": 2048, 00:11:35.873 "data_size": 63488 00:11:35.873 }, 00:11:35.873 { 00:11:35.873 "name": "pt2", 00:11:35.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.873 "is_configured": true, 00:11:35.873 "data_offset": 2048, 00:11:35.873 "data_size": 63488 00:11:35.873 }, 00:11:35.873 { 00:11:35.873 "name": "pt3", 00:11:35.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.873 "is_configured": true, 00:11:35.873 "data_offset": 2048, 00:11:35.873 "data_size": 63488 00:11:35.873 }, 00:11:35.873 { 00:11:35.873 "name": "pt4", 00:11:35.873 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.873 "is_configured": true, 00:11:35.873 "data_offset": 2048, 00:11:35.873 "data_size": 63488 00:11:35.873 } 00:11:35.873 ] 00:11:35.873 }' 00:11:35.873 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.873 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.133 16:53:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.133 [2024-11-08 16:53:05.591793] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.133 "name": "raid_bdev1", 00:11:36.133 "aliases": [ 00:11:36.133 "0f9f13c6-1937-4408-9fb8-4b47658d71a9" 00:11:36.133 ], 00:11:36.133 "product_name": "Raid Volume", 00:11:36.133 "block_size": 512, 00:11:36.133 "num_blocks": 253952, 00:11:36.133 "uuid": "0f9f13c6-1937-4408-9fb8-4b47658d71a9", 00:11:36.133 "assigned_rate_limits": { 00:11:36.133 "rw_ios_per_sec": 0, 00:11:36.133 "rw_mbytes_per_sec": 0, 00:11:36.133 "r_mbytes_per_sec": 0, 00:11:36.133 "w_mbytes_per_sec": 0 00:11:36.133 }, 00:11:36.133 "claimed": false, 00:11:36.133 "zoned": false, 00:11:36.133 "supported_io_types": { 00:11:36.133 "read": true, 00:11:36.133 "write": true, 00:11:36.133 "unmap": true, 00:11:36.133 "flush": true, 00:11:36.133 "reset": true, 00:11:36.133 "nvme_admin": false, 00:11:36.133 "nvme_io": false, 00:11:36.133 "nvme_io_md": false, 00:11:36.133 "write_zeroes": true, 00:11:36.133 "zcopy": false, 00:11:36.133 "get_zone_info": false, 00:11:36.133 "zone_management": false, 00:11:36.133 "zone_append": false, 00:11:36.133 "compare": false, 00:11:36.133 "compare_and_write": false, 00:11:36.133 "abort": false, 00:11:36.133 "seek_hole": false, 00:11:36.133 "seek_data": false, 00:11:36.133 "copy": false, 00:11:36.133 "nvme_iov_md": false 00:11:36.133 }, 00:11:36.133 
"memory_domains": [ 00:11:36.133 { 00:11:36.133 "dma_device_id": "system", 00:11:36.133 "dma_device_type": 1 00:11:36.133 }, 00:11:36.133 { 00:11:36.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.133 "dma_device_type": 2 00:11:36.133 }, 00:11:36.133 { 00:11:36.133 "dma_device_id": "system", 00:11:36.133 "dma_device_type": 1 00:11:36.133 }, 00:11:36.133 { 00:11:36.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.133 "dma_device_type": 2 00:11:36.133 }, 00:11:36.133 { 00:11:36.133 "dma_device_id": "system", 00:11:36.133 "dma_device_type": 1 00:11:36.133 }, 00:11:36.133 { 00:11:36.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.133 "dma_device_type": 2 00:11:36.133 }, 00:11:36.133 { 00:11:36.133 "dma_device_id": "system", 00:11:36.133 "dma_device_type": 1 00:11:36.133 }, 00:11:36.133 { 00:11:36.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.133 "dma_device_type": 2 00:11:36.133 } 00:11:36.133 ], 00:11:36.133 "driver_specific": { 00:11:36.133 "raid": { 00:11:36.133 "uuid": "0f9f13c6-1937-4408-9fb8-4b47658d71a9", 00:11:36.133 "strip_size_kb": 64, 00:11:36.133 "state": "online", 00:11:36.133 "raid_level": "raid0", 00:11:36.133 "superblock": true, 00:11:36.133 "num_base_bdevs": 4, 00:11:36.133 "num_base_bdevs_discovered": 4, 00:11:36.133 "num_base_bdevs_operational": 4, 00:11:36.133 "base_bdevs_list": [ 00:11:36.133 { 00:11:36.133 "name": "pt1", 00:11:36.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.133 "is_configured": true, 00:11:36.133 "data_offset": 2048, 00:11:36.133 "data_size": 63488 00:11:36.133 }, 00:11:36.133 { 00:11:36.133 "name": "pt2", 00:11:36.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.133 "is_configured": true, 00:11:36.133 "data_offset": 2048, 00:11:36.133 "data_size": 63488 00:11:36.133 }, 00:11:36.133 { 00:11:36.133 "name": "pt3", 00:11:36.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.133 "is_configured": true, 00:11:36.133 "data_offset": 2048, 00:11:36.133 "data_size": 63488 
00:11:36.133 }, 00:11:36.133 { 00:11:36.133 "name": "pt4", 00:11:36.133 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:36.133 "is_configured": true, 00:11:36.133 "data_offset": 2048, 00:11:36.133 "data_size": 63488 00:11:36.133 } 00:11:36.133 ] 00:11:36.133 } 00:11:36.133 } 00:11:36.133 }' 00:11:36.133 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:36.393 pt2 00:11:36.393 pt3 00:11:36.393 pt4' 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:36.393 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.394 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.394 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.394 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.394 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.394 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.394 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.394 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:36.394 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.394 16:53:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.394 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.394 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.653 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.653 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.653 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:36.653 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.653 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.654 [2024-11-08 16:53:05.939274] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0f9f13c6-1937-4408-9fb8-4b47658d71a9 '!=' 0f9f13c6-1937-4408-9fb8-4b47658d71a9 ']' 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81655 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81655 ']' 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81655 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.654 16:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81655 00:11:36.654 16:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.654 16:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.654 killing process with pid 81655 00:11:36.654 16:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81655' 00:11:36.654 16:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81655 00:11:36.654 [2024-11-08 16:53:06.021820] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.654 [2024-11-08 16:53:06.021936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.654 16:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81655 00:11:36.654 [2024-11-08 16:53:06.022024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.654 [2024-11-08 16:53:06.022036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:36.654 [2024-11-08 16:53:06.067381] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.914 16:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:36.914 00:11:36.914 real 0m4.302s 00:11:36.914 user 0m6.797s 00:11:36.914 sys 0m0.964s 00:11:36.914 16:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:36.914 16:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.914 ************************************ 00:11:36.914 END TEST raid_superblock_test 
00:11:36.914 ************************************ 00:11:36.914 16:53:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:36.914 16:53:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:36.914 16:53:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.914 16:53:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.914 ************************************ 00:11:36.914 START TEST raid_read_error_test 00:11:36.914 ************************************ 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tlYQISh0xn 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81904 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81904 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- 
# '[' -z 81904 ']' 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:36.914 16:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.174 [2024-11-08 16:53:06.479594] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:37.174 [2024-11-08 16:53:06.479747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81904 ] 00:11:37.174 [2024-11-08 16:53:06.626809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.174 [2024-11-08 16:53:06.676096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.433 [2024-11-08 16:53:06.720119] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.433 [2024-11-08 16:53:06.720163] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.001 BaseBdev1_malloc 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.001 true 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.001 [2024-11-08 16:53:07.358883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:38.001 [2024-11-08 16:53:07.358945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.001 [2024-11-08 16:53:07.358983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:38.001 [2024-11-08 16:53:07.358993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.001 [2024-11-08 16:53:07.361326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.001 [2024-11-08 16:53:07.361363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:38.001 BaseBdev1 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.001 BaseBdev2_malloc 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.001 true 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.001 [2024-11-08 16:53:07.410622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:38.001 [2024-11-08 16:53:07.410693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.001 [2024-11-08 16:53:07.410723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:38.001 [2024-11-08 16:53:07.410733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.001 [2024-11-08 16:53:07.412891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.001 [2024-11-08 16:53:07.412927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:38.001 BaseBdev2 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.001 BaseBdev3_malloc 00:11:38.001 16:53:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.001 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.002 true 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.002 [2024-11-08 16:53:07.451359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:38.002 [2024-11-08 16:53:07.451410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.002 [2024-11-08 16:53:07.451430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:38.002 [2024-11-08 16:53:07.451439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.002 [2024-11-08 16:53:07.453557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.002 [2024-11-08 16:53:07.453641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:38.002 BaseBdev3 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.002 BaseBdev4_malloc 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.002 true 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.002 [2024-11-08 16:53:07.492132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:38.002 [2024-11-08 16:53:07.492181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.002 [2024-11-08 16:53:07.492205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:38.002 [2024-11-08 16:53:07.492214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.002 [2024-11-08 16:53:07.494382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.002 [2024-11-08 16:53:07.494462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:38.002 BaseBdev4 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.002 [2024-11-08 16:53:07.504141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.002 [2024-11-08 16:53:07.505981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.002 [2024-11-08 16:53:07.506070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.002 [2024-11-08 16:53:07.506125] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.002 [2024-11-08 16:53:07.506320] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:38.002 [2024-11-08 16:53:07.506333] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:38.002 [2024-11-08 16:53:07.506588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:38.002 [2024-11-08 16:53:07.506732] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:38.002 [2024-11-08 16:53:07.506744] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:38.002 [2024-11-08 16:53:07.506871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:38.002 16:53:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.002 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.260 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.260 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.260 "name": "raid_bdev1", 00:11:38.260 "uuid": "1133c195-916f-4f3a-a6a1-96456a110056", 00:11:38.260 "strip_size_kb": 64, 00:11:38.260 "state": "online", 00:11:38.260 "raid_level": "raid0", 00:11:38.260 "superblock": true, 00:11:38.260 "num_base_bdevs": 4, 00:11:38.260 "num_base_bdevs_discovered": 4, 00:11:38.260 "num_base_bdevs_operational": 4, 00:11:38.260 "base_bdevs_list": [ 00:11:38.260 
{ 00:11:38.260 "name": "BaseBdev1", 00:11:38.260 "uuid": "876e12ab-089c-51c1-96a5-e56d4f71864b", 00:11:38.260 "is_configured": true, 00:11:38.260 "data_offset": 2048, 00:11:38.260 "data_size": 63488 00:11:38.260 }, 00:11:38.260 { 00:11:38.260 "name": "BaseBdev2", 00:11:38.260 "uuid": "4d8a89fc-64fb-5f77-bb0a-1936e8d20e5f", 00:11:38.260 "is_configured": true, 00:11:38.260 "data_offset": 2048, 00:11:38.260 "data_size": 63488 00:11:38.260 }, 00:11:38.260 { 00:11:38.260 "name": "BaseBdev3", 00:11:38.260 "uuid": "4051f72d-4924-591f-9a6c-ae29fd977e7c", 00:11:38.260 "is_configured": true, 00:11:38.260 "data_offset": 2048, 00:11:38.260 "data_size": 63488 00:11:38.260 }, 00:11:38.260 { 00:11:38.260 "name": "BaseBdev4", 00:11:38.260 "uuid": "5bb9bb3f-895e-50dc-9d07-0d3e8ee15e18", 00:11:38.260 "is_configured": true, 00:11:38.260 "data_offset": 2048, 00:11:38.260 "data_size": 63488 00:11:38.260 } 00:11:38.260 ] 00:11:38.260 }' 00:11:38.260 16:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.260 16:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.519 16:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:38.519 16:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:38.776 [2024-11-08 16:53:08.123526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:39.712 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.713 16:53:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.713 16:53:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.713 "name": "raid_bdev1", 00:11:39.713 "uuid": "1133c195-916f-4f3a-a6a1-96456a110056", 00:11:39.713 "strip_size_kb": 64, 00:11:39.713 "state": "online", 00:11:39.713 "raid_level": "raid0", 00:11:39.713 "superblock": true, 00:11:39.713 "num_base_bdevs": 4, 00:11:39.713 "num_base_bdevs_discovered": 4, 00:11:39.713 "num_base_bdevs_operational": 4, 00:11:39.713 "base_bdevs_list": [ 00:11:39.713 { 00:11:39.713 "name": "BaseBdev1", 00:11:39.713 "uuid": "876e12ab-089c-51c1-96a5-e56d4f71864b", 00:11:39.713 "is_configured": true, 00:11:39.713 "data_offset": 2048, 00:11:39.713 "data_size": 63488 00:11:39.713 }, 00:11:39.713 { 00:11:39.713 "name": "BaseBdev2", 00:11:39.713 "uuid": "4d8a89fc-64fb-5f77-bb0a-1936e8d20e5f", 00:11:39.713 "is_configured": true, 00:11:39.713 "data_offset": 2048, 00:11:39.713 "data_size": 63488 00:11:39.713 }, 00:11:39.713 { 00:11:39.713 "name": "BaseBdev3", 00:11:39.713 "uuid": "4051f72d-4924-591f-9a6c-ae29fd977e7c", 00:11:39.713 "is_configured": true, 00:11:39.713 "data_offset": 2048, 00:11:39.713 "data_size": 63488 00:11:39.713 }, 00:11:39.713 { 00:11:39.713 "name": "BaseBdev4", 00:11:39.713 "uuid": "5bb9bb3f-895e-50dc-9d07-0d3e8ee15e18", 00:11:39.713 "is_configured": true, 00:11:39.713 "data_offset": 2048, 00:11:39.713 "data_size": 63488 00:11:39.713 } 00:11:39.713 ] 00:11:39.713 }' 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.713 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.971 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:39.971 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.971 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.971 [2024-11-08 16:53:09.471847] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.971 [2024-11-08 16:53:09.471952] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.971 [2024-11-08 16:53:09.474546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.971 [2024-11-08 16:53:09.474656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.971 [2024-11-08 16:53:09.474747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.971 [2024-11-08 16:53:09.474810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:39.971 { 00:11:39.971 "results": [ 00:11:39.971 { 00:11:39.971 "job": "raid_bdev1", 00:11:39.971 "core_mask": "0x1", 00:11:39.971 "workload": "randrw", 00:11:39.971 "percentage": 50, 00:11:39.971 "status": "finished", 00:11:39.971 "queue_depth": 1, 00:11:39.971 "io_size": 131072, 00:11:39.971 "runtime": 1.349079, 00:11:39.971 "iops": 15117.721052658888, 00:11:39.971 "mibps": 1889.715131582361, 00:11:39.971 "io_failed": 1, 00:11:39.971 "io_timeout": 0, 00:11:39.971 "avg_latency_us": 91.80492518868758, 00:11:39.971 "min_latency_us": 25.9353711790393, 00:11:39.971 "max_latency_us": 1523.926637554585 00:11:39.971 } 00:11:39.971 ], 00:11:39.971 "core_count": 1 00:11:39.971 } 00:11:39.971 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.971 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81904 00:11:39.971 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81904 ']' 00:11:39.971 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81904 00:11:39.971 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:39.971 16:53:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:39.971 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81904 00:11:40.229 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:40.229 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:40.229 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81904' 00:11:40.229 killing process with pid 81904 00:11:40.229 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81904 00:11:40.229 [2024-11-08 16:53:09.508900] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.229 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81904 00:11:40.229 [2024-11-08 16:53:09.545375] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.487 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tlYQISh0xn 00:11:40.487 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:40.487 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:40.487 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:40.487 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:40.487 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.487 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:40.487 16:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:40.487 00:11:40.487 real 0m3.414s 00:11:40.487 user 0m4.341s 00:11:40.487 sys 0m0.560s 00:11:40.487 ************************************ 00:11:40.487 END TEST raid_read_error_test 
00:11:40.487 ************************************ 00:11:40.487 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.487 16:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.487 16:53:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:40.487 16:53:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:40.487 16:53:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.487 16:53:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.487 ************************************ 00:11:40.487 START TEST raid_write_error_test 00:11:40.487 ************************************ 00:11:40.487 16:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iwimv9UA4k 00:11:40.488 16:53:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82033 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82033 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82033 ']' 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:40.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:40.488 16:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.488 [2024-11-08 16:53:09.971906] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:40.488 [2024-11-08 16:53:09.972703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82033 ] 00:11:40.746 [2024-11-08 16:53:10.142892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.746 [2024-11-08 16:53:10.198027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.746 [2024-11-08 16:53:10.243157] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.746 [2024-11-08 16:53:10.243190] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.313 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.313 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:41.313 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.313 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:41.313 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.313 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.572 BaseBdev1_malloc 00:11:41.572 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.572 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:41.572 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.572 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.572 true 00:11:41.572 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:41.572 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:41.572 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.572 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.572 [2024-11-08 16:53:10.858571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:41.572 [2024-11-08 16:53:10.858628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.573 [2024-11-08 16:53:10.858664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:41.573 [2024-11-08 16:53:10.858675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.573 [2024-11-08 16:53:10.861184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.573 [2024-11-08 16:53:10.861226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:41.573 BaseBdev1 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.573 BaseBdev2_malloc 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:41.573 16:53:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.573 true 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.573 [2024-11-08 16:53:10.905796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:41.573 [2024-11-08 16:53:10.905854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.573 [2024-11-08 16:53:10.905876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:41.573 [2024-11-08 16:53:10.905887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.573 [2024-11-08 16:53:10.908329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.573 [2024-11-08 16:53:10.908373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:41.573 BaseBdev2 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:41.573 BaseBdev3_malloc 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.573 true 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.573 [2024-11-08 16:53:10.939226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:41.573 [2024-11-08 16:53:10.939288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.573 [2024-11-08 16:53:10.939310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:41.573 [2024-11-08 16:53:10.939321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.573 [2024-11-08 16:53:10.941748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.573 [2024-11-08 16:53:10.941786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:41.573 BaseBdev3 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.573 BaseBdev4_malloc 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.573 true 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.573 [2024-11-08 16:53:10.972919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:41.573 [2024-11-08 16:53:10.973020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.573 [2024-11-08 16:53:10.973050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:41.573 [2024-11-08 16:53:10.973060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.573 [2024-11-08 16:53:10.975452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.573 [2024-11-08 16:53:10.975494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:41.573 BaseBdev4 
00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.573 [2024-11-08 16:53:10.980976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.573 [2024-11-08 16:53:10.983064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.573 [2024-11-08 16:53:10.983180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.573 [2024-11-08 16:53:10.983247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:41.573 [2024-11-08 16:53:10.983472] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:41.573 [2024-11-08 16:53:10.983486] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:41.573 [2024-11-08 16:53:10.983819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:41.573 [2024-11-08 16:53:10.984046] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:41.573 [2024-11-08 16:53:10.984068] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:41.573 [2024-11-08 16:53:10.984203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.573 16:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.573 16:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.573 16:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.573 "name": "raid_bdev1", 00:11:41.573 "uuid": "0f23ab95-89a0-43cc-b0d6-8ed300c990ed", 00:11:41.573 "strip_size_kb": 64, 00:11:41.573 "state": "online", 00:11:41.573 "raid_level": "raid0", 00:11:41.573 "superblock": true, 00:11:41.573 "num_base_bdevs": 4, 00:11:41.573 "num_base_bdevs_discovered": 4, 00:11:41.573 
"num_base_bdevs_operational": 4, 00:11:41.573 "base_bdevs_list": [ 00:11:41.573 { 00:11:41.573 "name": "BaseBdev1", 00:11:41.573 "uuid": "e4018839-2450-5f6c-a837-5ed208c7a69b", 00:11:41.573 "is_configured": true, 00:11:41.573 "data_offset": 2048, 00:11:41.573 "data_size": 63488 00:11:41.573 }, 00:11:41.573 { 00:11:41.573 "name": "BaseBdev2", 00:11:41.573 "uuid": "13862da1-81f4-5e98-ae53-df128235c01e", 00:11:41.573 "is_configured": true, 00:11:41.573 "data_offset": 2048, 00:11:41.573 "data_size": 63488 00:11:41.573 }, 00:11:41.573 { 00:11:41.573 "name": "BaseBdev3", 00:11:41.573 "uuid": "bdad4c38-d13c-5acd-ac3e-24731a0e9c76", 00:11:41.573 "is_configured": true, 00:11:41.573 "data_offset": 2048, 00:11:41.573 "data_size": 63488 00:11:41.573 }, 00:11:41.573 { 00:11:41.573 "name": "BaseBdev4", 00:11:41.574 "uuid": "ed35636a-11f2-5068-83a7-2772a7bfca99", 00:11:41.574 "is_configured": true, 00:11:41.574 "data_offset": 2048, 00:11:41.574 "data_size": 63488 00:11:41.574 } 00:11:41.574 ] 00:11:41.574 }' 00:11:41.574 16:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.574 16:53:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.147 16:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:42.147 16:53:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:42.147 [2024-11-08 16:53:11.556467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.085 "name": "raid_bdev1", 00:11:43.085 "uuid": "0f23ab95-89a0-43cc-b0d6-8ed300c990ed", 00:11:43.085 "strip_size_kb": 64, 00:11:43.085 "state": "online", 00:11:43.085 "raid_level": "raid0", 00:11:43.085 "superblock": true, 00:11:43.085 "num_base_bdevs": 4, 00:11:43.085 "num_base_bdevs_discovered": 4, 00:11:43.085 "num_base_bdevs_operational": 4, 00:11:43.085 "base_bdevs_list": [ 00:11:43.085 { 00:11:43.085 "name": "BaseBdev1", 00:11:43.085 "uuid": "e4018839-2450-5f6c-a837-5ed208c7a69b", 00:11:43.085 "is_configured": true, 00:11:43.085 "data_offset": 2048, 00:11:43.085 "data_size": 63488 00:11:43.085 }, 00:11:43.085 { 00:11:43.085 "name": "BaseBdev2", 00:11:43.085 "uuid": "13862da1-81f4-5e98-ae53-df128235c01e", 00:11:43.085 "is_configured": true, 00:11:43.085 "data_offset": 2048, 00:11:43.085 "data_size": 63488 00:11:43.085 }, 00:11:43.085 { 00:11:43.085 "name": "BaseBdev3", 00:11:43.085 "uuid": "bdad4c38-d13c-5acd-ac3e-24731a0e9c76", 00:11:43.085 "is_configured": true, 00:11:43.085 "data_offset": 2048, 00:11:43.085 "data_size": 63488 00:11:43.085 }, 00:11:43.085 { 00:11:43.085 "name": "BaseBdev4", 00:11:43.085 "uuid": "ed35636a-11f2-5068-83a7-2772a7bfca99", 00:11:43.085 "is_configured": true, 00:11:43.085 "data_offset": 2048, 00:11:43.085 "data_size": 63488 00:11:43.085 } 00:11:43.085 ] 00:11:43.085 }' 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.085 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.653 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.653 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.653 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:43.653 [2024-11-08 16:53:12.969741] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.653 [2024-11-08 16:53:12.969783] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.653 [2024-11-08 16:53:12.972789] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.653 [2024-11-08 16:53:12.972904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.653 [2024-11-08 16:53:12.972965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.653 [2024-11-08 16:53:12.972977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:43.653 { 00:11:43.653 "results": [ 00:11:43.653 { 00:11:43.653 "job": "raid_bdev1", 00:11:43.653 "core_mask": "0x1", 00:11:43.653 "workload": "randrw", 00:11:43.653 "percentage": 50, 00:11:43.653 "status": "finished", 00:11:43.653 "queue_depth": 1, 00:11:43.653 "io_size": 131072, 00:11:43.653 "runtime": 1.413911, 00:11:43.653 "iops": 14261.152222452474, 00:11:43.653 "mibps": 1782.6440278065593, 00:11:43.653 "io_failed": 1, 00:11:43.653 "io_timeout": 0, 00:11:43.653 "avg_latency_us": 97.03818571024854, 00:11:43.653 "min_latency_us": 27.72401746724891, 00:11:43.653 "max_latency_us": 1638.4 00:11:43.653 } 00:11:43.653 ], 00:11:43.654 "core_count": 1 00:11:43.654 } 00:11:43.654 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.654 16:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82033 00:11:43.654 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82033 ']' 00:11:43.654 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82033 00:11:43.654 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:11:43.654 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:43.654 16:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82033 00:11:43.654 16:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:43.654 16:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:43.654 16:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82033' 00:11:43.654 killing process with pid 82033 00:11:43.654 16:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82033 00:11:43.654 [2024-11-08 16:53:13.006931] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.654 16:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82033 00:11:43.654 [2024-11-08 16:53:13.043883] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.913 16:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iwimv9UA4k 00:11:43.913 16:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:43.913 16:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:43.913 16:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:43.913 16:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:43.913 16:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.913 16:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:43.913 16:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:43.913 ************************************ 00:11:43.913 END TEST raid_write_error_test 00:11:43.913 
************************************ 00:11:43.913 00:11:43.913 real 0m3.447s 00:11:43.913 user 0m4.378s 00:11:43.913 sys 0m0.604s 00:11:43.913 16:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.913 16:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.913 16:53:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:43.913 16:53:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:43.913 16:53:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:43.913 16:53:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.913 16:53:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.913 ************************************ 00:11:43.913 START TEST raid_state_function_test 00:11:43.913 ************************************ 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.913 16:53:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:43.913 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:43.914 16:53:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82167 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82167' 00:11:43.914 Process raid pid: 82167 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82167 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82167 ']' 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.914 16:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.173 [2024-11-08 16:53:13.480770] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:44.173 [2024-11-08 16:53:13.481502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.173 [2024-11-08 16:53:13.646594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.173 [2024-11-08 16:53:13.696387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.433 [2024-11-08 16:53:13.739550] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.433 [2024-11-08 16:53:13.739590] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.002 [2024-11-08 16:53:14.373500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.002 [2024-11-08 16:53:14.373608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.002 [2024-11-08 16:53:14.373659] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.002 [2024-11-08 16:53:14.373689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.002 [2024-11-08 16:53:14.373711] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:45.002 [2024-11-08 16:53:14.373780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.002 [2024-11-08 16:53:14.373810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:45.002 [2024-11-08 16:53:14.373835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.002 "name": "Existed_Raid", 00:11:45.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.002 "strip_size_kb": 64, 00:11:45.002 "state": "configuring", 00:11:45.002 "raid_level": "concat", 00:11:45.002 "superblock": false, 00:11:45.002 "num_base_bdevs": 4, 00:11:45.002 "num_base_bdevs_discovered": 0, 00:11:45.002 "num_base_bdevs_operational": 4, 00:11:45.002 "base_bdevs_list": [ 00:11:45.002 { 00:11:45.002 "name": "BaseBdev1", 00:11:45.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.002 "is_configured": false, 00:11:45.002 "data_offset": 0, 00:11:45.002 "data_size": 0 00:11:45.002 }, 00:11:45.002 { 00:11:45.002 "name": "BaseBdev2", 00:11:45.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.002 "is_configured": false, 00:11:45.002 "data_offset": 0, 00:11:45.002 "data_size": 0 00:11:45.002 }, 00:11:45.002 { 00:11:45.002 "name": "BaseBdev3", 00:11:45.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.002 "is_configured": false, 00:11:45.002 "data_offset": 0, 00:11:45.002 "data_size": 0 00:11:45.002 }, 00:11:45.002 { 00:11:45.002 "name": "BaseBdev4", 00:11:45.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.002 "is_configured": false, 00:11:45.002 "data_offset": 0, 00:11:45.002 "data_size": 0 00:11:45.002 } 00:11:45.002 ] 00:11:45.002 }' 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.002 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.573 [2024-11-08 16:53:14.860595] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.573 [2024-11-08 16:53:14.860733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.573 [2024-11-08 16:53:14.872603] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.573 [2024-11-08 16:53:14.872709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.573 [2024-11-08 16:53:14.872748] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.573 [2024-11-08 16:53:14.872775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.573 [2024-11-08 16:53:14.872832] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:45.573 [2024-11-08 16:53:14.872867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.573 [2024-11-08 16:53:14.872894] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:45.573 [2024-11-08 16:53:14.872937] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.573 [2024-11-08 16:53:14.894024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.573 BaseBdev1 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.573 [ 00:11:45.573 { 00:11:45.573 "name": "BaseBdev1", 00:11:45.573 "aliases": [ 00:11:45.573 "0f94c3d0-2302-4d67-afcf-a7b7e149c239" 00:11:45.573 ], 00:11:45.573 "product_name": "Malloc disk", 00:11:45.573 "block_size": 512, 00:11:45.573 "num_blocks": 65536, 00:11:45.573 "uuid": "0f94c3d0-2302-4d67-afcf-a7b7e149c239", 00:11:45.573 "assigned_rate_limits": { 00:11:45.573 "rw_ios_per_sec": 0, 00:11:45.573 "rw_mbytes_per_sec": 0, 00:11:45.573 "r_mbytes_per_sec": 0, 00:11:45.573 "w_mbytes_per_sec": 0 00:11:45.573 }, 00:11:45.573 "claimed": true, 00:11:45.573 "claim_type": "exclusive_write", 00:11:45.573 "zoned": false, 00:11:45.573 "supported_io_types": { 00:11:45.573 "read": true, 00:11:45.573 "write": true, 00:11:45.573 "unmap": true, 00:11:45.573 "flush": true, 00:11:45.573 "reset": true, 00:11:45.573 "nvme_admin": false, 00:11:45.573 "nvme_io": false, 00:11:45.573 "nvme_io_md": false, 00:11:45.573 "write_zeroes": true, 00:11:45.573 "zcopy": true, 00:11:45.573 "get_zone_info": false, 00:11:45.573 "zone_management": false, 00:11:45.573 "zone_append": false, 00:11:45.573 "compare": false, 00:11:45.573 "compare_and_write": false, 00:11:45.573 "abort": true, 00:11:45.573 "seek_hole": false, 00:11:45.573 "seek_data": false, 00:11:45.573 "copy": true, 00:11:45.573 "nvme_iov_md": false 00:11:45.573 }, 00:11:45.573 "memory_domains": [ 00:11:45.573 { 00:11:45.573 "dma_device_id": "system", 00:11:45.573 "dma_device_type": 1 00:11:45.573 }, 00:11:45.573 { 00:11:45.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.573 "dma_device_type": 2 00:11:45.573 } 00:11:45.573 ], 00:11:45.573 "driver_specific": {} 00:11:45.573 } 00:11:45.573 ] 00:11:45.573 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.574 "name": "Existed_Raid", 
00:11:45.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.574 "strip_size_kb": 64, 00:11:45.574 "state": "configuring", 00:11:45.574 "raid_level": "concat", 00:11:45.574 "superblock": false, 00:11:45.574 "num_base_bdevs": 4, 00:11:45.574 "num_base_bdevs_discovered": 1, 00:11:45.574 "num_base_bdevs_operational": 4, 00:11:45.574 "base_bdevs_list": [ 00:11:45.574 { 00:11:45.574 "name": "BaseBdev1", 00:11:45.574 "uuid": "0f94c3d0-2302-4d67-afcf-a7b7e149c239", 00:11:45.574 "is_configured": true, 00:11:45.574 "data_offset": 0, 00:11:45.574 "data_size": 65536 00:11:45.574 }, 00:11:45.574 { 00:11:45.574 "name": "BaseBdev2", 00:11:45.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.574 "is_configured": false, 00:11:45.574 "data_offset": 0, 00:11:45.574 "data_size": 0 00:11:45.574 }, 00:11:45.574 { 00:11:45.574 "name": "BaseBdev3", 00:11:45.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.574 "is_configured": false, 00:11:45.574 "data_offset": 0, 00:11:45.574 "data_size": 0 00:11:45.574 }, 00:11:45.574 { 00:11:45.574 "name": "BaseBdev4", 00:11:45.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.574 "is_configured": false, 00:11:45.574 "data_offset": 0, 00:11:45.574 "data_size": 0 00:11:45.574 } 00:11:45.574 ] 00:11:45.574 }' 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.574 16:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.144 [2024-11-08 16:53:15.421215] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.144 [2024-11-08 16:53:15.421276] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.144 [2024-11-08 16:53:15.433243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.144 [2024-11-08 16:53:15.435348] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:46.144 [2024-11-08 16:53:15.435396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:46.144 [2024-11-08 16:53:15.435407] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:46.144 [2024-11-08 16:53:15.435417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:46.144 [2024-11-08 16:53:15.435424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:46.144 [2024-11-08 16:53:15.435433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
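Each `verify_raid_bdev_state` call above boils down to dumping all RAID bdevs, selecting one by name with jq, and comparing fields. A stand-alone sketch of that pattern, using a trimmed copy of the JSON from this run in place of a live `rpc_cmd bdev_raid_get_bdevs all` (field names are as they appear in the log; jq is assumed to be installed):

```shell
#!/usr/bin/env bash
# Select the target bdev's object from the array, exactly as the
# harness does with: jq -r '.[] | select(.name == "Existed_Raid")'
raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<'EOF'
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "concat",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4
  }
]
EOF
)

# verify_raid_bdev_state-style assertions on the selected object:
# only BaseBdev1 exists yet, so one of four bases is discovered.
[[ $(jq -r .state <<<"$raid_bdev_info") == configuring ]]
[[ $(jq -r .num_base_bdevs_discovered <<<"$raid_bdev_info") == 1 ]]
echo "Existed_Raid state verified"
```

Once all four malloc base bdevs are created and claimed, the same check would expect `state == online` and `num_base_bdevs_discovered == 4`, as in the raid_write_error_test output earlier.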
00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.144 "name": "Existed_Raid", 00:11:46.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.144 "strip_size_kb": 64, 00:11:46.144 "state": "configuring", 00:11:46.144 "raid_level": "concat", 00:11:46.144 "superblock": false, 00:11:46.144 "num_base_bdevs": 4, 00:11:46.144 
"num_base_bdevs_discovered": 1, 00:11:46.144 "num_base_bdevs_operational": 4, 00:11:46.144 "base_bdevs_list": [ 00:11:46.144 { 00:11:46.144 "name": "BaseBdev1", 00:11:46.144 "uuid": "0f94c3d0-2302-4d67-afcf-a7b7e149c239", 00:11:46.144 "is_configured": true, 00:11:46.144 "data_offset": 0, 00:11:46.144 "data_size": 65536 00:11:46.144 }, 00:11:46.144 { 00:11:46.144 "name": "BaseBdev2", 00:11:46.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.144 "is_configured": false, 00:11:46.144 "data_offset": 0, 00:11:46.144 "data_size": 0 00:11:46.144 }, 00:11:46.144 { 00:11:46.144 "name": "BaseBdev3", 00:11:46.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.144 "is_configured": false, 00:11:46.144 "data_offset": 0, 00:11:46.144 "data_size": 0 00:11:46.144 }, 00:11:46.144 { 00:11:46.144 "name": "BaseBdev4", 00:11:46.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.144 "is_configured": false, 00:11:46.144 "data_offset": 0, 00:11:46.144 "data_size": 0 00:11:46.144 } 00:11:46.144 ] 00:11:46.144 }' 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.144 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.404 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.404 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.404 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.663 [2024-11-08 16:53:15.938761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.663 BaseBdev2 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:46.663 16:53:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.663 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.663 [ 00:11:46.663 { 00:11:46.663 "name": "BaseBdev2", 00:11:46.663 "aliases": [ 00:11:46.663 "d6778a46-0693-457f-9b67-4c98a9c7c077" 00:11:46.663 ], 00:11:46.663 "product_name": "Malloc disk", 00:11:46.663 "block_size": 512, 00:11:46.663 "num_blocks": 65536, 00:11:46.663 "uuid": "d6778a46-0693-457f-9b67-4c98a9c7c077", 00:11:46.663 "assigned_rate_limits": { 00:11:46.663 "rw_ios_per_sec": 0, 00:11:46.663 "rw_mbytes_per_sec": 0, 00:11:46.663 "r_mbytes_per_sec": 0, 00:11:46.663 "w_mbytes_per_sec": 0 00:11:46.663 }, 00:11:46.663 "claimed": true, 00:11:46.663 "claim_type": "exclusive_write", 00:11:46.663 "zoned": false, 00:11:46.663 "supported_io_types": { 
00:11:46.663 "read": true, 00:11:46.663 "write": true, 00:11:46.663 "unmap": true, 00:11:46.663 "flush": true, 00:11:46.663 "reset": true, 00:11:46.663 "nvme_admin": false, 00:11:46.663 "nvme_io": false, 00:11:46.663 "nvme_io_md": false, 00:11:46.663 "write_zeroes": true, 00:11:46.663 "zcopy": true, 00:11:46.663 "get_zone_info": false, 00:11:46.663 "zone_management": false, 00:11:46.663 "zone_append": false, 00:11:46.663 "compare": false, 00:11:46.663 "compare_and_write": false, 00:11:46.663 "abort": true, 00:11:46.663 "seek_hole": false, 00:11:46.663 "seek_data": false, 00:11:46.663 "copy": true, 00:11:46.663 "nvme_iov_md": false 00:11:46.664 }, 00:11:46.664 "memory_domains": [ 00:11:46.664 { 00:11:46.664 "dma_device_id": "system", 00:11:46.664 "dma_device_type": 1 00:11:46.664 }, 00:11:46.664 { 00:11:46.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.664 "dma_device_type": 2 00:11:46.664 } 00:11:46.664 ], 00:11:46.664 "driver_specific": {} 00:11:46.664 } 00:11:46.664 ] 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.664 16:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.664 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.664 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.664 "name": "Existed_Raid", 00:11:46.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.664 "strip_size_kb": 64, 00:11:46.664 "state": "configuring", 00:11:46.664 "raid_level": "concat", 00:11:46.664 "superblock": false, 00:11:46.664 "num_base_bdevs": 4, 00:11:46.664 "num_base_bdevs_discovered": 2, 00:11:46.664 "num_base_bdevs_operational": 4, 00:11:46.664 "base_bdevs_list": [ 00:11:46.664 { 00:11:46.664 "name": "BaseBdev1", 00:11:46.664 "uuid": "0f94c3d0-2302-4d67-afcf-a7b7e149c239", 00:11:46.664 "is_configured": true, 00:11:46.664 "data_offset": 0, 00:11:46.664 "data_size": 65536 00:11:46.664 }, 00:11:46.664 { 00:11:46.664 "name": "BaseBdev2", 00:11:46.664 "uuid": "d6778a46-0693-457f-9b67-4c98a9c7c077", 00:11:46.664 
"is_configured": true, 00:11:46.664 "data_offset": 0, 00:11:46.664 "data_size": 65536 00:11:46.664 }, 00:11:46.664 { 00:11:46.664 "name": "BaseBdev3", 00:11:46.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.664 "is_configured": false, 00:11:46.664 "data_offset": 0, 00:11:46.664 "data_size": 0 00:11:46.664 }, 00:11:46.664 { 00:11:46.664 "name": "BaseBdev4", 00:11:46.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.664 "is_configured": false, 00:11:46.664 "data_offset": 0, 00:11:46.664 "data_size": 0 00:11:46.664 } 00:11:46.664 ] 00:11:46.664 }' 00:11:46.664 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.664 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.923 [2024-11-08 16:53:16.441392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.923 BaseBdev3 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.923 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.183 [ 00:11:47.183 { 00:11:47.183 "name": "BaseBdev3", 00:11:47.183 "aliases": [ 00:11:47.183 "ce478e58-5461-4ddf-a98e-efab358037e6" 00:11:47.183 ], 00:11:47.183 "product_name": "Malloc disk", 00:11:47.183 "block_size": 512, 00:11:47.183 "num_blocks": 65536, 00:11:47.183 "uuid": "ce478e58-5461-4ddf-a98e-efab358037e6", 00:11:47.183 "assigned_rate_limits": { 00:11:47.183 "rw_ios_per_sec": 0, 00:11:47.183 "rw_mbytes_per_sec": 0, 00:11:47.183 "r_mbytes_per_sec": 0, 00:11:47.183 "w_mbytes_per_sec": 0 00:11:47.183 }, 00:11:47.183 "claimed": true, 00:11:47.183 "claim_type": "exclusive_write", 00:11:47.183 "zoned": false, 00:11:47.183 "supported_io_types": { 00:11:47.183 "read": true, 00:11:47.183 "write": true, 00:11:47.183 "unmap": true, 00:11:47.183 "flush": true, 00:11:47.183 "reset": true, 00:11:47.183 "nvme_admin": false, 00:11:47.183 "nvme_io": false, 00:11:47.183 "nvme_io_md": false, 00:11:47.183 "write_zeroes": true, 00:11:47.183 "zcopy": true, 00:11:47.183 "get_zone_info": false, 00:11:47.183 "zone_management": false, 00:11:47.183 "zone_append": false, 00:11:47.183 "compare": false, 00:11:47.183 "compare_and_write": false, 
00:11:47.183 "abort": true, 00:11:47.183 "seek_hole": false, 00:11:47.183 "seek_data": false, 00:11:47.183 "copy": true, 00:11:47.183 "nvme_iov_md": false 00:11:47.183 }, 00:11:47.183 "memory_domains": [ 00:11:47.183 { 00:11:47.183 "dma_device_id": "system", 00:11:47.183 "dma_device_type": 1 00:11:47.183 }, 00:11:47.183 { 00:11:47.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.183 "dma_device_type": 2 00:11:47.183 } 00:11:47.183 ], 00:11:47.183 "driver_specific": {} 00:11:47.183 } 00:11:47.183 ] 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.183 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.183 "name": "Existed_Raid", 00:11:47.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.183 "strip_size_kb": 64, 00:11:47.183 "state": "configuring", 00:11:47.183 "raid_level": "concat", 00:11:47.183 "superblock": false, 00:11:47.184 "num_base_bdevs": 4, 00:11:47.184 "num_base_bdevs_discovered": 3, 00:11:47.184 "num_base_bdevs_operational": 4, 00:11:47.184 "base_bdevs_list": [ 00:11:47.184 { 00:11:47.184 "name": "BaseBdev1", 00:11:47.184 "uuid": "0f94c3d0-2302-4d67-afcf-a7b7e149c239", 00:11:47.184 "is_configured": true, 00:11:47.184 "data_offset": 0, 00:11:47.184 "data_size": 65536 00:11:47.184 }, 00:11:47.184 { 00:11:47.184 "name": "BaseBdev2", 00:11:47.184 "uuid": "d6778a46-0693-457f-9b67-4c98a9c7c077", 00:11:47.184 "is_configured": true, 00:11:47.184 "data_offset": 0, 00:11:47.184 "data_size": 65536 00:11:47.184 }, 00:11:47.184 { 00:11:47.184 "name": "BaseBdev3", 00:11:47.184 "uuid": "ce478e58-5461-4ddf-a98e-efab358037e6", 00:11:47.184 "is_configured": true, 00:11:47.184 "data_offset": 0, 00:11:47.184 "data_size": 65536 00:11:47.184 }, 00:11:47.184 { 00:11:47.184 "name": "BaseBdev4", 00:11:47.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.184 "is_configured": false, 
00:11:47.184 "data_offset": 0, 00:11:47.184 "data_size": 0 00:11:47.184 } 00:11:47.184 ] 00:11:47.184 }' 00:11:47.184 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.184 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.443 [2024-11-08 16:53:16.923988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:47.443 [2024-11-08 16:53:16.924041] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:47.443 [2024-11-08 16:53:16.924060] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:47.443 [2024-11-08 16:53:16.924368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:47.443 [2024-11-08 16:53:16.924507] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:47.443 [2024-11-08 16:53:16.924520] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:11:47.443 [2024-11-08 16:53:16.924796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.443 BaseBdev4 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.443 [ 00:11:47.443 { 00:11:47.443 "name": "BaseBdev4", 00:11:47.443 "aliases": [ 00:11:47.443 "1b56bf09-c37b-4e58-8434-550c0a1a8668" 00:11:47.443 ], 00:11:47.443 "product_name": "Malloc disk", 00:11:47.443 "block_size": 512, 00:11:47.443 "num_blocks": 65536, 00:11:47.443 "uuid": "1b56bf09-c37b-4e58-8434-550c0a1a8668", 00:11:47.443 "assigned_rate_limits": { 00:11:47.443 "rw_ios_per_sec": 0, 00:11:47.443 "rw_mbytes_per_sec": 0, 00:11:47.443 "r_mbytes_per_sec": 0, 00:11:47.443 "w_mbytes_per_sec": 0 00:11:47.443 }, 00:11:47.443 "claimed": true, 00:11:47.443 "claim_type": "exclusive_write", 00:11:47.443 "zoned": false, 00:11:47.443 "supported_io_types": { 00:11:47.443 "read": true, 00:11:47.443 "write": true, 00:11:47.443 "unmap": true, 00:11:47.443 "flush": true, 00:11:47.443 "reset": true, 00:11:47.443 
"nvme_admin": false, 00:11:47.443 "nvme_io": false, 00:11:47.443 "nvme_io_md": false, 00:11:47.443 "write_zeroes": true, 00:11:47.443 "zcopy": true, 00:11:47.443 "get_zone_info": false, 00:11:47.443 "zone_management": false, 00:11:47.443 "zone_append": false, 00:11:47.443 "compare": false, 00:11:47.443 "compare_and_write": false, 00:11:47.443 "abort": true, 00:11:47.443 "seek_hole": false, 00:11:47.443 "seek_data": false, 00:11:47.443 "copy": true, 00:11:47.443 "nvme_iov_md": false 00:11:47.443 }, 00:11:47.443 "memory_domains": [ 00:11:47.443 { 00:11:47.443 "dma_device_id": "system", 00:11:47.443 "dma_device_type": 1 00:11:47.443 }, 00:11:47.443 { 00:11:47.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.443 "dma_device_type": 2 00:11:47.443 } 00:11:47.443 ], 00:11:47.443 "driver_specific": {} 00:11:47.443 } 00:11:47.443 ] 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.443 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.444 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.444 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.444 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.444 
16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.444 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.444 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.444 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.703 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.703 16:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.703 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.703 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.703 16:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.703 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.703 "name": "Existed_Raid", 00:11:47.703 "uuid": "e5595edc-4817-4e09-a25c-105dab7f1842", 00:11:47.703 "strip_size_kb": 64, 00:11:47.703 "state": "online", 00:11:47.703 "raid_level": "concat", 00:11:47.703 "superblock": false, 00:11:47.703 "num_base_bdevs": 4, 00:11:47.703 "num_base_bdevs_discovered": 4, 00:11:47.703 "num_base_bdevs_operational": 4, 00:11:47.703 "base_bdevs_list": [ 00:11:47.703 { 00:11:47.703 "name": "BaseBdev1", 00:11:47.703 "uuid": "0f94c3d0-2302-4d67-afcf-a7b7e149c239", 00:11:47.703 "is_configured": true, 00:11:47.703 "data_offset": 0, 00:11:47.703 "data_size": 65536 00:11:47.703 }, 00:11:47.703 { 00:11:47.703 "name": "BaseBdev2", 00:11:47.703 "uuid": "d6778a46-0693-457f-9b67-4c98a9c7c077", 00:11:47.703 "is_configured": true, 00:11:47.703 "data_offset": 0, 00:11:47.703 "data_size": 65536 00:11:47.703 }, 00:11:47.703 { 00:11:47.703 "name": "BaseBdev3", 
00:11:47.703 "uuid": "ce478e58-5461-4ddf-a98e-efab358037e6", 00:11:47.703 "is_configured": true, 00:11:47.703 "data_offset": 0, 00:11:47.703 "data_size": 65536 00:11:47.703 }, 00:11:47.703 { 00:11:47.703 "name": "BaseBdev4", 00:11:47.703 "uuid": "1b56bf09-c37b-4e58-8434-550c0a1a8668", 00:11:47.703 "is_configured": true, 00:11:47.703 "data_offset": 0, 00:11:47.703 "data_size": 65536 00:11:47.703 } 00:11:47.703 ] 00:11:47.703 }' 00:11:47.703 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.703 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.962 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:47.962 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:47.962 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.962 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.962 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.962 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.962 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:47.962 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.962 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.962 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.962 [2024-11-08 16:53:17.431681] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.962 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.962 
16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.962 "name": "Existed_Raid", 00:11:47.962 "aliases": [ 00:11:47.962 "e5595edc-4817-4e09-a25c-105dab7f1842" 00:11:47.962 ], 00:11:47.962 "product_name": "Raid Volume", 00:11:47.962 "block_size": 512, 00:11:47.962 "num_blocks": 262144, 00:11:47.962 "uuid": "e5595edc-4817-4e09-a25c-105dab7f1842", 00:11:47.962 "assigned_rate_limits": { 00:11:47.962 "rw_ios_per_sec": 0, 00:11:47.962 "rw_mbytes_per_sec": 0, 00:11:47.962 "r_mbytes_per_sec": 0, 00:11:47.962 "w_mbytes_per_sec": 0 00:11:47.962 }, 00:11:47.962 "claimed": false, 00:11:47.962 "zoned": false, 00:11:47.962 "supported_io_types": { 00:11:47.962 "read": true, 00:11:47.962 "write": true, 00:11:47.962 "unmap": true, 00:11:47.963 "flush": true, 00:11:47.963 "reset": true, 00:11:47.963 "nvme_admin": false, 00:11:47.963 "nvme_io": false, 00:11:47.963 "nvme_io_md": false, 00:11:47.963 "write_zeroes": true, 00:11:47.963 "zcopy": false, 00:11:47.963 "get_zone_info": false, 00:11:47.963 "zone_management": false, 00:11:47.963 "zone_append": false, 00:11:47.963 "compare": false, 00:11:47.963 "compare_and_write": false, 00:11:47.963 "abort": false, 00:11:47.963 "seek_hole": false, 00:11:47.963 "seek_data": false, 00:11:47.963 "copy": false, 00:11:47.963 "nvme_iov_md": false 00:11:47.963 }, 00:11:47.963 "memory_domains": [ 00:11:47.963 { 00:11:47.963 "dma_device_id": "system", 00:11:47.963 "dma_device_type": 1 00:11:47.963 }, 00:11:47.963 { 00:11:47.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.963 "dma_device_type": 2 00:11:47.963 }, 00:11:47.963 { 00:11:47.963 "dma_device_id": "system", 00:11:47.963 "dma_device_type": 1 00:11:47.963 }, 00:11:47.963 { 00:11:47.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.963 "dma_device_type": 2 00:11:47.963 }, 00:11:47.963 { 00:11:47.963 "dma_device_id": "system", 00:11:47.963 "dma_device_type": 1 00:11:47.963 }, 00:11:47.963 { 00:11:47.963 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:47.963 "dma_device_type": 2 00:11:47.963 }, 00:11:47.963 { 00:11:47.963 "dma_device_id": "system", 00:11:47.963 "dma_device_type": 1 00:11:47.963 }, 00:11:47.963 { 00:11:47.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.963 "dma_device_type": 2 00:11:47.963 } 00:11:47.963 ], 00:11:47.963 "driver_specific": { 00:11:47.963 "raid": { 00:11:47.963 "uuid": "e5595edc-4817-4e09-a25c-105dab7f1842", 00:11:47.963 "strip_size_kb": 64, 00:11:47.963 "state": "online", 00:11:47.963 "raid_level": "concat", 00:11:47.963 "superblock": false, 00:11:47.963 "num_base_bdevs": 4, 00:11:47.963 "num_base_bdevs_discovered": 4, 00:11:47.963 "num_base_bdevs_operational": 4, 00:11:47.963 "base_bdevs_list": [ 00:11:47.963 { 00:11:47.963 "name": "BaseBdev1", 00:11:47.963 "uuid": "0f94c3d0-2302-4d67-afcf-a7b7e149c239", 00:11:47.963 "is_configured": true, 00:11:47.963 "data_offset": 0, 00:11:47.963 "data_size": 65536 00:11:47.963 }, 00:11:47.963 { 00:11:47.963 "name": "BaseBdev2", 00:11:47.963 "uuid": "d6778a46-0693-457f-9b67-4c98a9c7c077", 00:11:47.963 "is_configured": true, 00:11:47.963 "data_offset": 0, 00:11:47.963 "data_size": 65536 00:11:47.963 }, 00:11:47.963 { 00:11:47.963 "name": "BaseBdev3", 00:11:47.963 "uuid": "ce478e58-5461-4ddf-a98e-efab358037e6", 00:11:47.963 "is_configured": true, 00:11:47.963 "data_offset": 0, 00:11:47.963 "data_size": 65536 00:11:47.963 }, 00:11:47.963 { 00:11:47.963 "name": "BaseBdev4", 00:11:47.963 "uuid": "1b56bf09-c37b-4e58-8434-550c0a1a8668", 00:11:47.963 "is_configured": true, 00:11:47.963 "data_offset": 0, 00:11:47.963 "data_size": 65536 00:11:47.963 } 00:11:47.963 ] 00:11:47.963 } 00:11:47.963 } 00:11:47.963 }' 00:11:47.963 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:48.222 BaseBdev2 
00:11:48.222 BaseBdev3 00:11:48.222 BaseBdev4' 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.222 16:53:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.222 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.480 16:53:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.480 [2024-11-08 16:53:17.782809] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.480 [2024-11-08 16:53:17.782893] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.480 [2024-11-08 16:53:17.782981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:48.480 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.481 "name": "Existed_Raid", 00:11:48.481 "uuid": "e5595edc-4817-4e09-a25c-105dab7f1842", 00:11:48.481 "strip_size_kb": 64, 00:11:48.481 "state": "offline", 00:11:48.481 "raid_level": "concat", 00:11:48.481 "superblock": false, 00:11:48.481 "num_base_bdevs": 4, 00:11:48.481 "num_base_bdevs_discovered": 3, 00:11:48.481 "num_base_bdevs_operational": 3, 00:11:48.481 "base_bdevs_list": [ 00:11:48.481 { 00:11:48.481 "name": null, 00:11:48.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.481 "is_configured": false, 00:11:48.481 "data_offset": 0, 00:11:48.481 "data_size": 65536 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "name": "BaseBdev2", 00:11:48.481 "uuid": "d6778a46-0693-457f-9b67-4c98a9c7c077", 00:11:48.481 "is_configured": 
true, 00:11:48.481 "data_offset": 0, 00:11:48.481 "data_size": 65536 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "name": "BaseBdev3", 00:11:48.481 "uuid": "ce478e58-5461-4ddf-a98e-efab358037e6", 00:11:48.481 "is_configured": true, 00:11:48.481 "data_offset": 0, 00:11:48.481 "data_size": 65536 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "name": "BaseBdev4", 00:11:48.481 "uuid": "1b56bf09-c37b-4e58-8434-550c0a1a8668", 00:11:48.481 "is_configured": true, 00:11:48.481 "data_offset": 0, 00:11:48.481 "data_size": 65536 00:11:48.481 } 00:11:48.481 ] 00:11:48.481 }' 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.481 16:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.050 [2024-11-08 16:53:18.357841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.050 [2024-11-08 16:53:18.421249] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:49.050 16:53:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.050 [2024-11-08 16:53:18.492841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:49.050 [2024-11-08 16:53:18.492949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.050 BaseBdev2 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.050 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.310 [ 00:11:49.310 { 00:11:49.310 "name": "BaseBdev2", 00:11:49.310 "aliases": [ 00:11:49.310 "c6e5b05a-661d-4357-a172-3c7ac567bf05" 00:11:49.310 ], 00:11:49.310 "product_name": "Malloc disk", 00:11:49.310 "block_size": 512, 00:11:49.310 "num_blocks": 65536, 00:11:49.310 "uuid": "c6e5b05a-661d-4357-a172-3c7ac567bf05", 00:11:49.310 "assigned_rate_limits": { 00:11:49.310 "rw_ios_per_sec": 0, 00:11:49.310 "rw_mbytes_per_sec": 0, 00:11:49.310 "r_mbytes_per_sec": 0, 00:11:49.310 "w_mbytes_per_sec": 0 00:11:49.310 }, 00:11:49.310 "claimed": false, 00:11:49.310 "zoned": false, 00:11:49.310 "supported_io_types": { 00:11:49.310 "read": true, 00:11:49.310 "write": true, 00:11:49.310 "unmap": true, 00:11:49.310 "flush": true, 00:11:49.310 "reset": true, 00:11:49.310 "nvme_admin": false, 00:11:49.310 "nvme_io": false, 00:11:49.310 "nvme_io_md": false, 00:11:49.310 "write_zeroes": true, 00:11:49.310 "zcopy": true, 00:11:49.310 "get_zone_info": false, 00:11:49.310 "zone_management": false, 00:11:49.310 "zone_append": false, 00:11:49.310 "compare": false, 00:11:49.310 "compare_and_write": false, 00:11:49.310 "abort": true, 00:11:49.310 "seek_hole": false, 00:11:49.310 
"seek_data": false, 00:11:49.310 "copy": true, 00:11:49.310 "nvme_iov_md": false 00:11:49.310 }, 00:11:49.310 "memory_domains": [ 00:11:49.310 { 00:11:49.310 "dma_device_id": "system", 00:11:49.310 "dma_device_type": 1 00:11:49.310 }, 00:11:49.310 { 00:11:49.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.310 "dma_device_type": 2 00:11:49.310 } 00:11:49.310 ], 00:11:49.310 "driver_specific": {} 00:11:49.310 } 00:11:49.310 ] 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.310 BaseBdev3 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.310 [ 00:11:49.310 { 00:11:49.310 "name": "BaseBdev3", 00:11:49.310 "aliases": [ 00:11:49.310 "f419bdea-7411-468d-b200-d89076a913f4" 00:11:49.310 ], 00:11:49.310 "product_name": "Malloc disk", 00:11:49.310 "block_size": 512, 00:11:49.310 "num_blocks": 65536, 00:11:49.310 "uuid": "f419bdea-7411-468d-b200-d89076a913f4", 00:11:49.310 "assigned_rate_limits": { 00:11:49.310 "rw_ios_per_sec": 0, 00:11:49.310 "rw_mbytes_per_sec": 0, 00:11:49.310 "r_mbytes_per_sec": 0, 00:11:49.310 "w_mbytes_per_sec": 0 00:11:49.310 }, 00:11:49.310 "claimed": false, 00:11:49.310 "zoned": false, 00:11:49.310 "supported_io_types": { 00:11:49.310 "read": true, 00:11:49.310 "write": true, 00:11:49.310 "unmap": true, 00:11:49.310 "flush": true, 00:11:49.310 "reset": true, 00:11:49.310 "nvme_admin": false, 00:11:49.310 "nvme_io": false, 00:11:49.310 "nvme_io_md": false, 00:11:49.310 "write_zeroes": true, 00:11:49.310 "zcopy": true, 00:11:49.310 "get_zone_info": false, 00:11:49.310 "zone_management": false, 00:11:49.310 "zone_append": false, 00:11:49.310 "compare": false, 00:11:49.310 "compare_and_write": false, 00:11:49.310 "abort": true, 00:11:49.310 "seek_hole": false, 00:11:49.310 "seek_data": false, 
00:11:49.310 "copy": true, 00:11:49.310 "nvme_iov_md": false 00:11:49.310 }, 00:11:49.310 "memory_domains": [ 00:11:49.310 { 00:11:49.310 "dma_device_id": "system", 00:11:49.310 "dma_device_type": 1 00:11:49.310 }, 00:11:49.310 { 00:11:49.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.310 "dma_device_type": 2 00:11:49.310 } 00:11:49.310 ], 00:11:49.310 "driver_specific": {} 00:11:49.310 } 00:11:49.310 ] 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.310 BaseBdev4 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:49.310 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:49.311 
16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.311 [ 00:11:49.311 { 00:11:49.311 "name": "BaseBdev4", 00:11:49.311 "aliases": [ 00:11:49.311 "da77a8a0-0cc2-43d5-ac35-05671f70a803" 00:11:49.311 ], 00:11:49.311 "product_name": "Malloc disk", 00:11:49.311 "block_size": 512, 00:11:49.311 "num_blocks": 65536, 00:11:49.311 "uuid": "da77a8a0-0cc2-43d5-ac35-05671f70a803", 00:11:49.311 "assigned_rate_limits": { 00:11:49.311 "rw_ios_per_sec": 0, 00:11:49.311 "rw_mbytes_per_sec": 0, 00:11:49.311 "r_mbytes_per_sec": 0, 00:11:49.311 "w_mbytes_per_sec": 0 00:11:49.311 }, 00:11:49.311 "claimed": false, 00:11:49.311 "zoned": false, 00:11:49.311 "supported_io_types": { 00:11:49.311 "read": true, 00:11:49.311 "write": true, 00:11:49.311 "unmap": true, 00:11:49.311 "flush": true, 00:11:49.311 "reset": true, 00:11:49.311 "nvme_admin": false, 00:11:49.311 "nvme_io": false, 00:11:49.311 "nvme_io_md": false, 00:11:49.311 "write_zeroes": true, 00:11:49.311 "zcopy": true, 00:11:49.311 "get_zone_info": false, 00:11:49.311 "zone_management": false, 00:11:49.311 "zone_append": false, 00:11:49.311 "compare": false, 00:11:49.311 "compare_and_write": false, 00:11:49.311 "abort": true, 00:11:49.311 "seek_hole": false, 00:11:49.311 "seek_data": false, 00:11:49.311 
"copy": true, 00:11:49.311 "nvme_iov_md": false 00:11:49.311 }, 00:11:49.311 "memory_domains": [ 00:11:49.311 { 00:11:49.311 "dma_device_id": "system", 00:11:49.311 "dma_device_type": 1 00:11:49.311 }, 00:11:49.311 { 00:11:49.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.311 "dma_device_type": 2 00:11:49.311 } 00:11:49.311 ], 00:11:49.311 "driver_specific": {} 00:11:49.311 } 00:11:49.311 ] 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.311 [2024-11-08 16:53:18.719895] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.311 [2024-11-08 16:53:18.719983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.311 [2024-11-08 16:53:18.720031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.311 [2024-11-08 16:53:18.721960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.311 [2024-11-08 16:53:18.722052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.311 16:53:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.311 "name": "Existed_Raid", 00:11:49.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.311 "strip_size_kb": 64, 00:11:49.311 "state": "configuring", 00:11:49.311 
"raid_level": "concat", 00:11:49.311 "superblock": false, 00:11:49.311 "num_base_bdevs": 4, 00:11:49.311 "num_base_bdevs_discovered": 3, 00:11:49.311 "num_base_bdevs_operational": 4, 00:11:49.311 "base_bdevs_list": [ 00:11:49.311 { 00:11:49.311 "name": "BaseBdev1", 00:11:49.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.311 "is_configured": false, 00:11:49.311 "data_offset": 0, 00:11:49.311 "data_size": 0 00:11:49.311 }, 00:11:49.311 { 00:11:49.311 "name": "BaseBdev2", 00:11:49.311 "uuid": "c6e5b05a-661d-4357-a172-3c7ac567bf05", 00:11:49.311 "is_configured": true, 00:11:49.311 "data_offset": 0, 00:11:49.311 "data_size": 65536 00:11:49.311 }, 00:11:49.311 { 00:11:49.311 "name": "BaseBdev3", 00:11:49.311 "uuid": "f419bdea-7411-468d-b200-d89076a913f4", 00:11:49.311 "is_configured": true, 00:11:49.311 "data_offset": 0, 00:11:49.311 "data_size": 65536 00:11:49.311 }, 00:11:49.311 { 00:11:49.311 "name": "BaseBdev4", 00:11:49.311 "uuid": "da77a8a0-0cc2-43d5-ac35-05671f70a803", 00:11:49.311 "is_configured": true, 00:11:49.311 "data_offset": 0, 00:11:49.311 "data_size": 65536 00:11:49.311 } 00:11:49.311 ] 00:11:49.311 }' 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.311 16:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.879 [2024-11-08 16:53:19.199250] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.879 "name": "Existed_Raid", 00:11:49.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.879 "strip_size_kb": 64, 00:11:49.879 "state": "configuring", 00:11:49.879 "raid_level": "concat", 00:11:49.879 "superblock": false, 
00:11:49.879 "num_base_bdevs": 4, 00:11:49.879 "num_base_bdevs_discovered": 2, 00:11:49.879 "num_base_bdevs_operational": 4, 00:11:49.879 "base_bdevs_list": [ 00:11:49.879 { 00:11:49.879 "name": "BaseBdev1", 00:11:49.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.879 "is_configured": false, 00:11:49.879 "data_offset": 0, 00:11:49.879 "data_size": 0 00:11:49.879 }, 00:11:49.879 { 00:11:49.879 "name": null, 00:11:49.879 "uuid": "c6e5b05a-661d-4357-a172-3c7ac567bf05", 00:11:49.879 "is_configured": false, 00:11:49.879 "data_offset": 0, 00:11:49.879 "data_size": 65536 00:11:49.879 }, 00:11:49.879 { 00:11:49.879 "name": "BaseBdev3", 00:11:49.879 "uuid": "f419bdea-7411-468d-b200-d89076a913f4", 00:11:49.879 "is_configured": true, 00:11:49.879 "data_offset": 0, 00:11:49.879 "data_size": 65536 00:11:49.879 }, 00:11:49.879 { 00:11:49.879 "name": "BaseBdev4", 00:11:49.879 "uuid": "da77a8a0-0cc2-43d5-ac35-05671f70a803", 00:11:49.879 "is_configured": true, 00:11:49.879 "data_offset": 0, 00:11:49.879 "data_size": 65536 00:11:49.879 } 00:11:49.879 ] 00:11:49.879 }' 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.879 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.137 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.137 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:50.137 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.137 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.137 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:50.396 16:53:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.396 [2024-11-08 16:53:19.694151] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.396 BaseBdev1 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:50.396 [ 00:11:50.396 { 00:11:50.396 "name": "BaseBdev1", 00:11:50.396 "aliases": [ 00:11:50.396 "b1902e66-5b2a-48a1-ac82-d8d2c3d859d3" 00:11:50.396 ], 00:11:50.396 "product_name": "Malloc disk", 00:11:50.396 "block_size": 512, 00:11:50.396 "num_blocks": 65536, 00:11:50.396 "uuid": "b1902e66-5b2a-48a1-ac82-d8d2c3d859d3", 00:11:50.396 "assigned_rate_limits": { 00:11:50.396 "rw_ios_per_sec": 0, 00:11:50.396 "rw_mbytes_per_sec": 0, 00:11:50.396 "r_mbytes_per_sec": 0, 00:11:50.396 "w_mbytes_per_sec": 0 00:11:50.396 }, 00:11:50.396 "claimed": true, 00:11:50.396 "claim_type": "exclusive_write", 00:11:50.396 "zoned": false, 00:11:50.396 "supported_io_types": { 00:11:50.396 "read": true, 00:11:50.396 "write": true, 00:11:50.396 "unmap": true, 00:11:50.396 "flush": true, 00:11:50.396 "reset": true, 00:11:50.396 "nvme_admin": false, 00:11:50.396 "nvme_io": false, 00:11:50.396 "nvme_io_md": false, 00:11:50.396 "write_zeroes": true, 00:11:50.396 "zcopy": true, 00:11:50.396 "get_zone_info": false, 00:11:50.396 "zone_management": false, 00:11:50.396 "zone_append": false, 00:11:50.396 "compare": false, 00:11:50.396 "compare_and_write": false, 00:11:50.396 "abort": true, 00:11:50.396 "seek_hole": false, 00:11:50.396 "seek_data": false, 00:11:50.396 "copy": true, 00:11:50.396 "nvme_iov_md": false 00:11:50.396 }, 00:11:50.396 "memory_domains": [ 00:11:50.396 { 00:11:50.396 "dma_device_id": "system", 00:11:50.396 "dma_device_type": 1 00:11:50.396 }, 00:11:50.396 { 00:11:50.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.396 "dma_device_type": 2 00:11:50.396 } 00:11:50.396 ], 00:11:50.396 "driver_specific": {} 00:11:50.396 } 00:11:50.396 ] 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.396 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.397 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.397 "name": "Existed_Raid", 00:11:50.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.397 "strip_size_kb": 64, 00:11:50.397 "state": "configuring", 00:11:50.397 "raid_level": "concat", 00:11:50.397 "superblock": false, 
00:11:50.397 "num_base_bdevs": 4, 00:11:50.397 "num_base_bdevs_discovered": 3, 00:11:50.397 "num_base_bdevs_operational": 4, 00:11:50.397 "base_bdevs_list": [ 00:11:50.397 { 00:11:50.397 "name": "BaseBdev1", 00:11:50.397 "uuid": "b1902e66-5b2a-48a1-ac82-d8d2c3d859d3", 00:11:50.397 "is_configured": true, 00:11:50.397 "data_offset": 0, 00:11:50.397 "data_size": 65536 00:11:50.397 }, 00:11:50.397 { 00:11:50.397 "name": null, 00:11:50.397 "uuid": "c6e5b05a-661d-4357-a172-3c7ac567bf05", 00:11:50.397 "is_configured": false, 00:11:50.397 "data_offset": 0, 00:11:50.397 "data_size": 65536 00:11:50.397 }, 00:11:50.397 { 00:11:50.397 "name": "BaseBdev3", 00:11:50.397 "uuid": "f419bdea-7411-468d-b200-d89076a913f4", 00:11:50.397 "is_configured": true, 00:11:50.397 "data_offset": 0, 00:11:50.397 "data_size": 65536 00:11:50.397 }, 00:11:50.397 { 00:11:50.397 "name": "BaseBdev4", 00:11:50.397 "uuid": "da77a8a0-0cc2-43d5-ac35-05671f70a803", 00:11:50.397 "is_configured": true, 00:11:50.397 "data_offset": 0, 00:11:50.397 "data_size": 65536 00:11:50.397 } 00:11:50.397 ] 00:11:50.397 }' 00:11:50.397 16:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.397 16:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.656 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.656 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:50.916 16:53:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.916 [2024-11-08 16:53:20.233321] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.916 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.917 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.917 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.917 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.917 16:53:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.917 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.917 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.917 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.917 "name": "Existed_Raid", 00:11:50.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.917 "strip_size_kb": 64, 00:11:50.917 "state": "configuring", 00:11:50.917 "raid_level": "concat", 00:11:50.917 "superblock": false, 00:11:50.917 "num_base_bdevs": 4, 00:11:50.917 "num_base_bdevs_discovered": 2, 00:11:50.917 "num_base_bdevs_operational": 4, 00:11:50.917 "base_bdevs_list": [ 00:11:50.917 { 00:11:50.917 "name": "BaseBdev1", 00:11:50.917 "uuid": "b1902e66-5b2a-48a1-ac82-d8d2c3d859d3", 00:11:50.917 "is_configured": true, 00:11:50.917 "data_offset": 0, 00:11:50.917 "data_size": 65536 00:11:50.917 }, 00:11:50.917 { 00:11:50.917 "name": null, 00:11:50.917 "uuid": "c6e5b05a-661d-4357-a172-3c7ac567bf05", 00:11:50.917 "is_configured": false, 00:11:50.917 "data_offset": 0, 00:11:50.917 "data_size": 65536 00:11:50.917 }, 00:11:50.917 { 00:11:50.917 "name": null, 00:11:50.917 "uuid": "f419bdea-7411-468d-b200-d89076a913f4", 00:11:50.917 "is_configured": false, 00:11:50.917 "data_offset": 0, 00:11:50.917 "data_size": 65536 00:11:50.917 }, 00:11:50.917 { 00:11:50.917 "name": "BaseBdev4", 00:11:50.917 "uuid": "da77a8a0-0cc2-43d5-ac35-05671f70a803", 00:11:50.917 "is_configured": true, 00:11:50.917 "data_offset": 0, 00:11:50.917 "data_size": 65536 00:11:50.917 } 00:11:50.917 ] 00:11:50.917 }' 00:11:50.917 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.917 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.176 16:53:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:51.176 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.176 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.176 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.435 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.435 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:51.435 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:51.435 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.436 [2024-11-08 16:53:20.724538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.436 "name": "Existed_Raid", 00:11:51.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.436 "strip_size_kb": 64, 00:11:51.436 "state": "configuring", 00:11:51.436 "raid_level": "concat", 00:11:51.436 "superblock": false, 00:11:51.436 "num_base_bdevs": 4, 00:11:51.436 "num_base_bdevs_discovered": 3, 00:11:51.436 "num_base_bdevs_operational": 4, 00:11:51.436 "base_bdevs_list": [ 00:11:51.436 { 00:11:51.436 "name": "BaseBdev1", 00:11:51.436 "uuid": "b1902e66-5b2a-48a1-ac82-d8d2c3d859d3", 00:11:51.436 "is_configured": true, 00:11:51.436 "data_offset": 0, 00:11:51.436 "data_size": 65536 00:11:51.436 }, 00:11:51.436 { 00:11:51.436 "name": null, 00:11:51.436 "uuid": "c6e5b05a-661d-4357-a172-3c7ac567bf05", 00:11:51.436 "is_configured": false, 00:11:51.436 "data_offset": 0, 00:11:51.436 "data_size": 65536 00:11:51.436 }, 00:11:51.436 { 00:11:51.436 "name": "BaseBdev3", 00:11:51.436 "uuid": 
"f419bdea-7411-468d-b200-d89076a913f4", 00:11:51.436 "is_configured": true, 00:11:51.436 "data_offset": 0, 00:11:51.436 "data_size": 65536 00:11:51.436 }, 00:11:51.436 { 00:11:51.436 "name": "BaseBdev4", 00:11:51.436 "uuid": "da77a8a0-0cc2-43d5-ac35-05671f70a803", 00:11:51.436 "is_configured": true, 00:11:51.436 "data_offset": 0, 00:11:51.436 "data_size": 65536 00:11:51.436 } 00:11:51.436 ] 00:11:51.436 }' 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.436 16:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.695 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.695 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.695 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.695 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.954 [2024-11-08 16:53:21.263682] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.954 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.954 "name": "Existed_Raid", 00:11:51.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.954 "strip_size_kb": 64, 00:11:51.954 "state": "configuring", 00:11:51.954 "raid_level": "concat", 00:11:51.954 "superblock": false, 00:11:51.954 "num_base_bdevs": 4, 00:11:51.954 
"num_base_bdevs_discovered": 2, 00:11:51.954 "num_base_bdevs_operational": 4, 00:11:51.954 "base_bdevs_list": [ 00:11:51.954 { 00:11:51.954 "name": null, 00:11:51.954 "uuid": "b1902e66-5b2a-48a1-ac82-d8d2c3d859d3", 00:11:51.954 "is_configured": false, 00:11:51.954 "data_offset": 0, 00:11:51.954 "data_size": 65536 00:11:51.954 }, 00:11:51.954 { 00:11:51.954 "name": null, 00:11:51.955 "uuid": "c6e5b05a-661d-4357-a172-3c7ac567bf05", 00:11:51.955 "is_configured": false, 00:11:51.955 "data_offset": 0, 00:11:51.955 "data_size": 65536 00:11:51.955 }, 00:11:51.955 { 00:11:51.955 "name": "BaseBdev3", 00:11:51.955 "uuid": "f419bdea-7411-468d-b200-d89076a913f4", 00:11:51.955 "is_configured": true, 00:11:51.955 "data_offset": 0, 00:11:51.955 "data_size": 65536 00:11:51.955 }, 00:11:51.955 { 00:11:51.955 "name": "BaseBdev4", 00:11:51.955 "uuid": "da77a8a0-0cc2-43d5-ac35-05671f70a803", 00:11:51.955 "is_configured": true, 00:11:51.955 "data_offset": 0, 00:11:51.955 "data_size": 65536 00:11:51.955 } 00:11:51.955 ] 00:11:51.955 }' 00:11:51.955 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.955 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.213 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.213 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:52.213 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.213 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.473 [2024-11-08 16:53:21.781731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.473 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.473 "name": "Existed_Raid", 00:11:52.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.473 "strip_size_kb": 64, 00:11:52.473 "state": "configuring", 00:11:52.473 "raid_level": "concat", 00:11:52.473 "superblock": false, 00:11:52.473 "num_base_bdevs": 4, 00:11:52.473 "num_base_bdevs_discovered": 3, 00:11:52.473 "num_base_bdevs_operational": 4, 00:11:52.474 "base_bdevs_list": [ 00:11:52.474 { 00:11:52.474 "name": null, 00:11:52.474 "uuid": "b1902e66-5b2a-48a1-ac82-d8d2c3d859d3", 00:11:52.474 "is_configured": false, 00:11:52.474 "data_offset": 0, 00:11:52.474 "data_size": 65536 00:11:52.474 }, 00:11:52.474 { 00:11:52.474 "name": "BaseBdev2", 00:11:52.474 "uuid": "c6e5b05a-661d-4357-a172-3c7ac567bf05", 00:11:52.474 "is_configured": true, 00:11:52.474 "data_offset": 0, 00:11:52.474 "data_size": 65536 00:11:52.474 }, 00:11:52.474 { 00:11:52.474 "name": "BaseBdev3", 00:11:52.474 "uuid": "f419bdea-7411-468d-b200-d89076a913f4", 00:11:52.474 "is_configured": true, 00:11:52.474 "data_offset": 0, 00:11:52.474 "data_size": 65536 00:11:52.474 }, 00:11:52.474 { 00:11:52.474 "name": "BaseBdev4", 00:11:52.474 "uuid": "da77a8a0-0cc2-43d5-ac35-05671f70a803", 00:11:52.474 "is_configured": true, 00:11:52.474 "data_offset": 0, 00:11:52.474 "data_size": 65536 00:11:52.474 } 00:11:52.474 ] 00:11:52.474 }' 00:11:52.474 16:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.474 16:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.732 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:52.732 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.732 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.732 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.732 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b1902e66-5b2a-48a1-ac82-d8d2c3d859d3 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.992 [2024-11-08 16:53:22.311960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:52.992 [2024-11-08 16:53:22.312091] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:52.992 [2024-11-08 16:53:22.312117] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:52.992 [2024-11-08 16:53:22.312436] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:52.992 [2024-11-08 16:53:22.312614] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:52.992 [2024-11-08 16:53:22.312682] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:52.992 [2024-11-08 16:53:22.312940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.992 NewBaseBdev 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.992 16:53:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.992 [ 00:11:52.992 { 00:11:52.992 "name": "NewBaseBdev", 00:11:52.992 "aliases": [ 00:11:52.992 "b1902e66-5b2a-48a1-ac82-d8d2c3d859d3" 00:11:52.992 ], 00:11:52.992 "product_name": "Malloc disk", 00:11:52.992 "block_size": 512, 00:11:52.992 "num_blocks": 65536, 00:11:52.992 "uuid": "b1902e66-5b2a-48a1-ac82-d8d2c3d859d3", 00:11:52.992 "assigned_rate_limits": { 00:11:52.992 "rw_ios_per_sec": 0, 00:11:52.992 "rw_mbytes_per_sec": 0, 00:11:52.992 "r_mbytes_per_sec": 0, 00:11:52.992 "w_mbytes_per_sec": 0 00:11:52.992 }, 00:11:52.992 "claimed": true, 00:11:52.992 "claim_type": "exclusive_write", 00:11:52.992 "zoned": false, 00:11:52.992 "supported_io_types": { 00:11:52.992 "read": true, 00:11:52.992 "write": true, 00:11:52.992 "unmap": true, 00:11:52.992 "flush": true, 00:11:52.992 "reset": true, 00:11:52.992 "nvme_admin": false, 00:11:52.992 "nvme_io": false, 00:11:52.992 "nvme_io_md": false, 00:11:52.992 "write_zeroes": true, 00:11:52.992 "zcopy": true, 00:11:52.992 "get_zone_info": false, 00:11:52.992 "zone_management": false, 00:11:52.992 "zone_append": false, 00:11:52.992 "compare": false, 00:11:52.992 "compare_and_write": false, 00:11:52.992 "abort": true, 00:11:52.992 "seek_hole": false, 00:11:52.992 "seek_data": false, 00:11:52.992 "copy": true, 00:11:52.992 "nvme_iov_md": false 00:11:52.992 }, 00:11:52.992 "memory_domains": [ 00:11:52.992 { 00:11:52.992 "dma_device_id": "system", 00:11:52.992 "dma_device_type": 1 00:11:52.992 }, 00:11:52.992 { 00:11:52.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.992 "dma_device_type": 2 00:11:52.992 } 00:11:52.992 ], 00:11:52.992 "driver_specific": {} 00:11:52.992 } 00:11:52.992 ] 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:52.992 16:53:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.992 "name": "Existed_Raid", 00:11:52.992 "uuid": "1ff9bc82-3189-42e9-ab90-2653abe360db", 00:11:52.992 "strip_size_kb": 64, 00:11:52.992 "state": "online", 00:11:52.992 "raid_level": 
"concat", 00:11:52.992 "superblock": false, 00:11:52.992 "num_base_bdevs": 4, 00:11:52.992 "num_base_bdevs_discovered": 4, 00:11:52.992 "num_base_bdevs_operational": 4, 00:11:52.992 "base_bdevs_list": [ 00:11:52.992 { 00:11:52.992 "name": "NewBaseBdev", 00:11:52.992 "uuid": "b1902e66-5b2a-48a1-ac82-d8d2c3d859d3", 00:11:52.992 "is_configured": true, 00:11:52.992 "data_offset": 0, 00:11:52.992 "data_size": 65536 00:11:52.992 }, 00:11:52.992 { 00:11:52.992 "name": "BaseBdev2", 00:11:52.992 "uuid": "c6e5b05a-661d-4357-a172-3c7ac567bf05", 00:11:52.992 "is_configured": true, 00:11:52.992 "data_offset": 0, 00:11:52.992 "data_size": 65536 00:11:52.992 }, 00:11:52.992 { 00:11:52.992 "name": "BaseBdev3", 00:11:52.992 "uuid": "f419bdea-7411-468d-b200-d89076a913f4", 00:11:52.992 "is_configured": true, 00:11:52.992 "data_offset": 0, 00:11:52.992 "data_size": 65536 00:11:52.992 }, 00:11:52.992 { 00:11:52.992 "name": "BaseBdev4", 00:11:52.992 "uuid": "da77a8a0-0cc2-43d5-ac35-05671f70a803", 00:11:52.992 "is_configured": true, 00:11:52.992 "data_offset": 0, 00:11:52.992 "data_size": 65536 00:11:52.992 } 00:11:52.992 ] 00:11:52.992 }' 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.992 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.561 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.561 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:53.561 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.561 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.561 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.562 [2024-11-08 16:53:22.851477] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.562 "name": "Existed_Raid", 00:11:53.562 "aliases": [ 00:11:53.562 "1ff9bc82-3189-42e9-ab90-2653abe360db" 00:11:53.562 ], 00:11:53.562 "product_name": "Raid Volume", 00:11:53.562 "block_size": 512, 00:11:53.562 "num_blocks": 262144, 00:11:53.562 "uuid": "1ff9bc82-3189-42e9-ab90-2653abe360db", 00:11:53.562 "assigned_rate_limits": { 00:11:53.562 "rw_ios_per_sec": 0, 00:11:53.562 "rw_mbytes_per_sec": 0, 00:11:53.562 "r_mbytes_per_sec": 0, 00:11:53.562 "w_mbytes_per_sec": 0 00:11:53.562 }, 00:11:53.562 "claimed": false, 00:11:53.562 "zoned": false, 00:11:53.562 "supported_io_types": { 00:11:53.562 "read": true, 00:11:53.562 "write": true, 00:11:53.562 "unmap": true, 00:11:53.562 "flush": true, 00:11:53.562 "reset": true, 00:11:53.562 "nvme_admin": false, 00:11:53.562 "nvme_io": false, 00:11:53.562 "nvme_io_md": false, 00:11:53.562 "write_zeroes": true, 00:11:53.562 "zcopy": false, 00:11:53.562 "get_zone_info": false, 00:11:53.562 "zone_management": false, 00:11:53.562 "zone_append": false, 00:11:53.562 "compare": false, 00:11:53.562 "compare_and_write": false, 00:11:53.562 "abort": false, 00:11:53.562 "seek_hole": false, 00:11:53.562 "seek_data": false, 00:11:53.562 "copy": false, 
00:11:53.562 "nvme_iov_md": false 00:11:53.562 }, 00:11:53.562 "memory_domains": [ 00:11:53.562 { 00:11:53.562 "dma_device_id": "system", 00:11:53.562 "dma_device_type": 1 00:11:53.562 }, 00:11:53.562 { 00:11:53.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.562 "dma_device_type": 2 00:11:53.562 }, 00:11:53.562 { 00:11:53.562 "dma_device_id": "system", 00:11:53.562 "dma_device_type": 1 00:11:53.562 }, 00:11:53.562 { 00:11:53.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.562 "dma_device_type": 2 00:11:53.562 }, 00:11:53.562 { 00:11:53.562 "dma_device_id": "system", 00:11:53.562 "dma_device_type": 1 00:11:53.562 }, 00:11:53.562 { 00:11:53.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.562 "dma_device_type": 2 00:11:53.562 }, 00:11:53.562 { 00:11:53.562 "dma_device_id": "system", 00:11:53.562 "dma_device_type": 1 00:11:53.562 }, 00:11:53.562 { 00:11:53.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.562 "dma_device_type": 2 00:11:53.562 } 00:11:53.562 ], 00:11:53.562 "driver_specific": { 00:11:53.562 "raid": { 00:11:53.562 "uuid": "1ff9bc82-3189-42e9-ab90-2653abe360db", 00:11:53.562 "strip_size_kb": 64, 00:11:53.562 "state": "online", 00:11:53.562 "raid_level": "concat", 00:11:53.562 "superblock": false, 00:11:53.562 "num_base_bdevs": 4, 00:11:53.562 "num_base_bdevs_discovered": 4, 00:11:53.562 "num_base_bdevs_operational": 4, 00:11:53.562 "base_bdevs_list": [ 00:11:53.562 { 00:11:53.562 "name": "NewBaseBdev", 00:11:53.562 "uuid": "b1902e66-5b2a-48a1-ac82-d8d2c3d859d3", 00:11:53.562 "is_configured": true, 00:11:53.562 "data_offset": 0, 00:11:53.562 "data_size": 65536 00:11:53.562 }, 00:11:53.562 { 00:11:53.562 "name": "BaseBdev2", 00:11:53.562 "uuid": "c6e5b05a-661d-4357-a172-3c7ac567bf05", 00:11:53.562 "is_configured": true, 00:11:53.562 "data_offset": 0, 00:11:53.562 "data_size": 65536 00:11:53.562 }, 00:11:53.562 { 00:11:53.562 "name": "BaseBdev3", 00:11:53.562 "uuid": "f419bdea-7411-468d-b200-d89076a913f4", 00:11:53.562 
"is_configured": true, 00:11:53.562 "data_offset": 0, 00:11:53.562 "data_size": 65536 00:11:53.562 }, 00:11:53.562 { 00:11:53.562 "name": "BaseBdev4", 00:11:53.562 "uuid": "da77a8a0-0cc2-43d5-ac35-05671f70a803", 00:11:53.562 "is_configured": true, 00:11:53.562 "data_offset": 0, 00:11:53.562 "data_size": 65536 00:11:53.562 } 00:11:53.562 ] 00:11:53.562 } 00:11:53.562 } 00:11:53.562 }' 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:53.562 BaseBdev2 00:11:53.562 BaseBdev3 00:11:53.562 BaseBdev4' 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.562 16:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.562 16:53:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.562 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.822 16:53:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 [2024-11-08 16:53:23.190522] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.822 [2024-11-08 16:53:23.190613] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.822 [2024-11-08 16:53:23.190771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.822 [2024-11-08 16:53:23.190875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.822 [2024-11-08 16:53:23.190923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 82167 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82167 ']' 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82167 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82167 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82167' 00:11:53.822 killing process with pid 82167 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82167 00:11:53.822 [2024-11-08 16:53:23.240774] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.822 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82167 00:11:53.822 [2024-11-08 16:53:23.282787] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:54.082 00:11:54.082 real 0m10.154s 00:11:54.082 user 0m17.467s 00:11:54.082 sys 0m2.102s 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.082 ************************************ 00:11:54.082 END TEST raid_state_function_test 00:11:54.082 ************************************ 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:54.082 16:53:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:54.082 16:53:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:54.082 16:53:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.082 16:53:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.082 ************************************ 00:11:54.082 START TEST raid_state_function_test_sb 00:11:54.082 ************************************ 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.082 
16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:54.082 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:54.341 Process raid pid: 82821 00:11:54.341 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:54.341 16:53:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82821 00:11:54.341 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:54.341 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82821' 00:11:54.341 16:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82821 00:11:54.341 16:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82821 ']' 00:11:54.341 16:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.341 16:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.341 16:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.341 16:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.341 16:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.341 [2024-11-08 16:53:23.687325] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:54.341 [2024-11-08 16:53:23.687518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.341 [2024-11-08 16:53:23.837244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.602 [2024-11-08 16:53:23.884873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.602 [2024-11-08 16:53:23.926650] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.602 [2024-11-08 16:53:23.926770] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.170 [2024-11-08 16:53:24.544027] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.170 [2024-11-08 16:53:24.544139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.170 [2024-11-08 16:53:24.544174] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.170 [2024-11-08 16:53:24.544199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.170 [2024-11-08 16:53:24.544218] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:55.170 [2024-11-08 16:53:24.544245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.170 [2024-11-08 16:53:24.544264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:55.170 [2024-11-08 16:53:24.544317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.170 16:53:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.170 "name": "Existed_Raid", 00:11:55.170 "uuid": "ae0ec78e-887a-47eb-9f61-6d7c33edd574", 00:11:55.170 "strip_size_kb": 64, 00:11:55.170 "state": "configuring", 00:11:55.170 "raid_level": "concat", 00:11:55.170 "superblock": true, 00:11:55.170 "num_base_bdevs": 4, 00:11:55.170 "num_base_bdevs_discovered": 0, 00:11:55.170 "num_base_bdevs_operational": 4, 00:11:55.170 "base_bdevs_list": [ 00:11:55.170 { 00:11:55.170 "name": "BaseBdev1", 00:11:55.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.170 "is_configured": false, 00:11:55.170 "data_offset": 0, 00:11:55.170 "data_size": 0 00:11:55.170 }, 00:11:55.170 { 00:11:55.170 "name": "BaseBdev2", 00:11:55.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.170 "is_configured": false, 00:11:55.170 "data_offset": 0, 00:11:55.170 "data_size": 0 00:11:55.170 }, 00:11:55.170 { 00:11:55.170 "name": "BaseBdev3", 00:11:55.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.170 "is_configured": false, 00:11:55.170 "data_offset": 0, 00:11:55.170 "data_size": 0 00:11:55.170 }, 00:11:55.170 { 00:11:55.170 "name": "BaseBdev4", 00:11:55.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.170 "is_configured": false, 00:11:55.170 "data_offset": 0, 00:11:55.170 "data_size": 0 00:11:55.170 } 00:11:55.170 ] 00:11:55.170 }' 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.170 16:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.737 16:53:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:55.737 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.737 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.737 [2024-11-08 16:53:25.031135] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.737 [2024-11-08 16:53:25.031185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:55.737 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.738 [2024-11-08 16:53:25.043160] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.738 [2024-11-08 16:53:25.043256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.738 [2024-11-08 16:53:25.043286] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.738 [2024-11-08 16:53:25.043311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.738 [2024-11-08 16:53:25.043329] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:55.738 [2024-11-08 16:53:25.043351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.738 [2024-11-08 16:53:25.043369] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:55.738 [2024-11-08 16:53:25.043418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.738 [2024-11-08 16:53:25.064506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.738 BaseBdev1 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.738 [ 00:11:55.738 { 00:11:55.738 "name": "BaseBdev1", 00:11:55.738 "aliases": [ 00:11:55.738 "19a6ea66-c2aa-49e2-9659-79d1b46ec95b" 00:11:55.738 ], 00:11:55.738 "product_name": "Malloc disk", 00:11:55.738 "block_size": 512, 00:11:55.738 "num_blocks": 65536, 00:11:55.738 "uuid": "19a6ea66-c2aa-49e2-9659-79d1b46ec95b", 00:11:55.738 "assigned_rate_limits": { 00:11:55.738 "rw_ios_per_sec": 0, 00:11:55.738 "rw_mbytes_per_sec": 0, 00:11:55.738 "r_mbytes_per_sec": 0, 00:11:55.738 "w_mbytes_per_sec": 0 00:11:55.738 }, 00:11:55.738 "claimed": true, 00:11:55.738 "claim_type": "exclusive_write", 00:11:55.738 "zoned": false, 00:11:55.738 "supported_io_types": { 00:11:55.738 "read": true, 00:11:55.738 "write": true, 00:11:55.738 "unmap": true, 00:11:55.738 "flush": true, 00:11:55.738 "reset": true, 00:11:55.738 "nvme_admin": false, 00:11:55.738 "nvme_io": false, 00:11:55.738 "nvme_io_md": false, 00:11:55.738 "write_zeroes": true, 00:11:55.738 "zcopy": true, 00:11:55.738 "get_zone_info": false, 00:11:55.738 "zone_management": false, 00:11:55.738 "zone_append": false, 00:11:55.738 "compare": false, 00:11:55.738 "compare_and_write": false, 00:11:55.738 "abort": true, 00:11:55.738 "seek_hole": false, 00:11:55.738 "seek_data": false, 00:11:55.738 "copy": true, 00:11:55.738 "nvme_iov_md": false 00:11:55.738 }, 00:11:55.738 "memory_domains": [ 00:11:55.738 { 00:11:55.738 "dma_device_id": "system", 00:11:55.738 "dma_device_type": 1 00:11:55.738 }, 00:11:55.738 { 00:11:55.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.738 "dma_device_type": 2 00:11:55.738 } 
00:11:55.738 ], 00:11:55.738 "driver_specific": {} 00:11:55.738 } 00:11:55.738 ] 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.738 16:53:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.738 "name": "Existed_Raid", 00:11:55.738 "uuid": "b59b9192-b9ed-4914-9852-6483d943230c", 00:11:55.738 "strip_size_kb": 64, 00:11:55.738 "state": "configuring", 00:11:55.738 "raid_level": "concat", 00:11:55.738 "superblock": true, 00:11:55.738 "num_base_bdevs": 4, 00:11:55.738 "num_base_bdevs_discovered": 1, 00:11:55.738 "num_base_bdevs_operational": 4, 00:11:55.738 "base_bdevs_list": [ 00:11:55.738 { 00:11:55.738 "name": "BaseBdev1", 00:11:55.738 "uuid": "19a6ea66-c2aa-49e2-9659-79d1b46ec95b", 00:11:55.738 "is_configured": true, 00:11:55.738 "data_offset": 2048, 00:11:55.738 "data_size": 63488 00:11:55.738 }, 00:11:55.738 { 00:11:55.738 "name": "BaseBdev2", 00:11:55.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.738 "is_configured": false, 00:11:55.738 "data_offset": 0, 00:11:55.738 "data_size": 0 00:11:55.738 }, 00:11:55.738 { 00:11:55.738 "name": "BaseBdev3", 00:11:55.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.738 "is_configured": false, 00:11:55.738 "data_offset": 0, 00:11:55.738 "data_size": 0 00:11:55.738 }, 00:11:55.738 { 00:11:55.738 "name": "BaseBdev4", 00:11:55.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.738 "is_configured": false, 00:11:55.738 "data_offset": 0, 00:11:55.738 "data_size": 0 00:11:55.738 } 00:11:55.738 ] 00:11:55.738 }' 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.738 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.307 16:53:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.307 [2024-11-08 16:53:25.595709] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.307 [2024-11-08 16:53:25.595772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.307 [2024-11-08 16:53:25.607765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.307 [2024-11-08 16:53:25.609926] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.307 [2024-11-08 16:53:25.610016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.307 [2024-11-08 16:53:25.610049] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.307 [2024-11-08 16:53:25.610073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.307 [2024-11-08 16:53:25.610138] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:56.307 [2024-11-08 16:53:25.610164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:56.307 "name": "Existed_Raid", 00:11:56.307 "uuid": "72709421-f828-4380-a1d6-334bac03ad66", 00:11:56.307 "strip_size_kb": 64, 00:11:56.307 "state": "configuring", 00:11:56.307 "raid_level": "concat", 00:11:56.307 "superblock": true, 00:11:56.307 "num_base_bdevs": 4, 00:11:56.307 "num_base_bdevs_discovered": 1, 00:11:56.307 "num_base_bdevs_operational": 4, 00:11:56.307 "base_bdevs_list": [ 00:11:56.307 { 00:11:56.307 "name": "BaseBdev1", 00:11:56.307 "uuid": "19a6ea66-c2aa-49e2-9659-79d1b46ec95b", 00:11:56.307 "is_configured": true, 00:11:56.307 "data_offset": 2048, 00:11:56.307 "data_size": 63488 00:11:56.307 }, 00:11:56.307 { 00:11:56.307 "name": "BaseBdev2", 00:11:56.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.307 "is_configured": false, 00:11:56.307 "data_offset": 0, 00:11:56.307 "data_size": 0 00:11:56.307 }, 00:11:56.307 { 00:11:56.307 "name": "BaseBdev3", 00:11:56.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.307 "is_configured": false, 00:11:56.307 "data_offset": 0, 00:11:56.307 "data_size": 0 00:11:56.307 }, 00:11:56.307 { 00:11:56.307 "name": "BaseBdev4", 00:11:56.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.307 "is_configured": false, 00:11:56.307 "data_offset": 0, 00:11:56.307 "data_size": 0 00:11:56.307 } 00:11:56.307 ] 00:11:56.307 }' 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.307 16:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.566 [2024-11-08 16:53:26.028975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:56.566 BaseBdev2 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.566 [ 00:11:56.566 { 00:11:56.566 "name": "BaseBdev2", 00:11:56.566 "aliases": [ 00:11:56.566 "fd41595c-48ef-4a64-9166-166c5bc3579b" 00:11:56.566 ], 00:11:56.566 "product_name": "Malloc disk", 00:11:56.566 "block_size": 512, 00:11:56.566 "num_blocks": 65536, 00:11:56.566 "uuid": "fd41595c-48ef-4a64-9166-166c5bc3579b", 
00:11:56.566 "assigned_rate_limits": { 00:11:56.566 "rw_ios_per_sec": 0, 00:11:56.566 "rw_mbytes_per_sec": 0, 00:11:56.566 "r_mbytes_per_sec": 0, 00:11:56.566 "w_mbytes_per_sec": 0 00:11:56.566 }, 00:11:56.566 "claimed": true, 00:11:56.566 "claim_type": "exclusive_write", 00:11:56.566 "zoned": false, 00:11:56.566 "supported_io_types": { 00:11:56.566 "read": true, 00:11:56.566 "write": true, 00:11:56.566 "unmap": true, 00:11:56.566 "flush": true, 00:11:56.566 "reset": true, 00:11:56.566 "nvme_admin": false, 00:11:56.566 "nvme_io": false, 00:11:56.566 "nvme_io_md": false, 00:11:56.566 "write_zeroes": true, 00:11:56.566 "zcopy": true, 00:11:56.566 "get_zone_info": false, 00:11:56.566 "zone_management": false, 00:11:56.566 "zone_append": false, 00:11:56.566 "compare": false, 00:11:56.566 "compare_and_write": false, 00:11:56.566 "abort": true, 00:11:56.566 "seek_hole": false, 00:11:56.566 "seek_data": false, 00:11:56.566 "copy": true, 00:11:56.566 "nvme_iov_md": false 00:11:56.566 }, 00:11:56.566 "memory_domains": [ 00:11:56.566 { 00:11:56.566 "dma_device_id": "system", 00:11:56.566 "dma_device_type": 1 00:11:56.566 }, 00:11:56.566 { 00:11:56.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.566 "dma_device_type": 2 00:11:56.566 } 00:11:56.566 ], 00:11:56.566 "driver_specific": {} 00:11:56.566 } 00:11:56.566 ] 00:11:56.566 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.567 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.826 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.826 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.826 "name": "Existed_Raid", 00:11:56.826 "uuid": "72709421-f828-4380-a1d6-334bac03ad66", 00:11:56.826 "strip_size_kb": 64, 00:11:56.826 "state": "configuring", 00:11:56.826 "raid_level": "concat", 00:11:56.826 "superblock": true, 00:11:56.826 "num_base_bdevs": 4, 00:11:56.826 "num_base_bdevs_discovered": 2, 00:11:56.826 
"num_base_bdevs_operational": 4, 00:11:56.826 "base_bdevs_list": [ 00:11:56.826 { 00:11:56.826 "name": "BaseBdev1", 00:11:56.826 "uuid": "19a6ea66-c2aa-49e2-9659-79d1b46ec95b", 00:11:56.826 "is_configured": true, 00:11:56.826 "data_offset": 2048, 00:11:56.826 "data_size": 63488 00:11:56.826 }, 00:11:56.826 { 00:11:56.826 "name": "BaseBdev2", 00:11:56.826 "uuid": "fd41595c-48ef-4a64-9166-166c5bc3579b", 00:11:56.826 "is_configured": true, 00:11:56.826 "data_offset": 2048, 00:11:56.826 "data_size": 63488 00:11:56.826 }, 00:11:56.826 { 00:11:56.826 "name": "BaseBdev3", 00:11:56.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.826 "is_configured": false, 00:11:56.826 "data_offset": 0, 00:11:56.826 "data_size": 0 00:11:56.826 }, 00:11:56.826 { 00:11:56.826 "name": "BaseBdev4", 00:11:56.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.826 "is_configured": false, 00:11:56.826 "data_offset": 0, 00:11:56.826 "data_size": 0 00:11:56.826 } 00:11:56.826 ] 00:11:56.826 }' 00:11:56.826 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.826 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 [2024-11-08 16:53:26.555252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.086 BaseBdev3 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 [ 00:11:57.086 { 00:11:57.086 "name": "BaseBdev3", 00:11:57.086 "aliases": [ 00:11:57.086 "4eb8189e-8b58-4bc5-b6c6-1a8f006d8c20" 00:11:57.086 ], 00:11:57.086 "product_name": "Malloc disk", 00:11:57.086 "block_size": 512, 00:11:57.086 "num_blocks": 65536, 00:11:57.086 "uuid": "4eb8189e-8b58-4bc5-b6c6-1a8f006d8c20", 00:11:57.086 "assigned_rate_limits": { 00:11:57.086 "rw_ios_per_sec": 0, 00:11:57.086 "rw_mbytes_per_sec": 0, 00:11:57.086 "r_mbytes_per_sec": 0, 00:11:57.086 "w_mbytes_per_sec": 0 00:11:57.086 }, 00:11:57.086 "claimed": true, 00:11:57.086 "claim_type": "exclusive_write", 00:11:57.086 "zoned": false, 00:11:57.086 "supported_io_types": { 
00:11:57.086 "read": true, 00:11:57.086 "write": true, 00:11:57.086 "unmap": true, 00:11:57.086 "flush": true, 00:11:57.086 "reset": true, 00:11:57.086 "nvme_admin": false, 00:11:57.086 "nvme_io": false, 00:11:57.086 "nvme_io_md": false, 00:11:57.086 "write_zeroes": true, 00:11:57.086 "zcopy": true, 00:11:57.086 "get_zone_info": false, 00:11:57.086 "zone_management": false, 00:11:57.086 "zone_append": false, 00:11:57.086 "compare": false, 00:11:57.086 "compare_and_write": false, 00:11:57.086 "abort": true, 00:11:57.086 "seek_hole": false, 00:11:57.086 "seek_data": false, 00:11:57.086 "copy": true, 00:11:57.086 "nvme_iov_md": false 00:11:57.086 }, 00:11:57.086 "memory_domains": [ 00:11:57.086 { 00:11:57.086 "dma_device_id": "system", 00:11:57.086 "dma_device_type": 1 00:11:57.086 }, 00:11:57.086 { 00:11:57.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.086 "dma_device_type": 2 00:11:57.086 } 00:11:57.086 ], 00:11:57.086 "driver_specific": {} 00:11:57.086 } 00:11:57.086 ] 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.086 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.346 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.346 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.346 "name": "Existed_Raid", 00:11:57.346 "uuid": "72709421-f828-4380-a1d6-334bac03ad66", 00:11:57.346 "strip_size_kb": 64, 00:11:57.346 "state": "configuring", 00:11:57.346 "raid_level": "concat", 00:11:57.346 "superblock": true, 00:11:57.346 "num_base_bdevs": 4, 00:11:57.346 "num_base_bdevs_discovered": 3, 00:11:57.346 "num_base_bdevs_operational": 4, 00:11:57.346 "base_bdevs_list": [ 00:11:57.346 { 00:11:57.346 "name": "BaseBdev1", 00:11:57.346 "uuid": "19a6ea66-c2aa-49e2-9659-79d1b46ec95b", 00:11:57.346 "is_configured": true, 00:11:57.346 "data_offset": 2048, 00:11:57.346 "data_size": 63488 00:11:57.346 }, 00:11:57.346 { 00:11:57.346 "name": "BaseBdev2", 00:11:57.346 
"uuid": "fd41595c-48ef-4a64-9166-166c5bc3579b", 00:11:57.346 "is_configured": true, 00:11:57.346 "data_offset": 2048, 00:11:57.346 "data_size": 63488 00:11:57.346 }, 00:11:57.346 { 00:11:57.346 "name": "BaseBdev3", 00:11:57.346 "uuid": "4eb8189e-8b58-4bc5-b6c6-1a8f006d8c20", 00:11:57.346 "is_configured": true, 00:11:57.346 "data_offset": 2048, 00:11:57.346 "data_size": 63488 00:11:57.346 }, 00:11:57.346 { 00:11:57.346 "name": "BaseBdev4", 00:11:57.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.346 "is_configured": false, 00:11:57.346 "data_offset": 0, 00:11:57.346 "data_size": 0 00:11:57.346 } 00:11:57.346 ] 00:11:57.346 }' 00:11:57.346 16:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.346 16:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.605 [2024-11-08 16:53:27.025668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.605 [2024-11-08 16:53:27.025984] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:57.605 [2024-11-08 16:53:27.026048] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:57.605 [2024-11-08 16:53:27.026431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:57.605 BaseBdev4 00:11:57.605 [2024-11-08 16:53:27.026626] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:57.605 [2024-11-08 16:53:27.026712] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:11:57.605 [2024-11-08 16:53:27.026900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.605 [ 00:11:57.605 { 00:11:57.605 "name": "BaseBdev4", 00:11:57.605 "aliases": [ 00:11:57.605 "884110e8-4a20-4a97-aa25-54a6c2cfef63" 00:11:57.605 ], 00:11:57.605 "product_name": "Malloc disk", 00:11:57.605 "block_size": 512, 00:11:57.605 
"num_blocks": 65536, 00:11:57.605 "uuid": "884110e8-4a20-4a97-aa25-54a6c2cfef63", 00:11:57.605 "assigned_rate_limits": { 00:11:57.605 "rw_ios_per_sec": 0, 00:11:57.605 "rw_mbytes_per_sec": 0, 00:11:57.605 "r_mbytes_per_sec": 0, 00:11:57.605 "w_mbytes_per_sec": 0 00:11:57.605 }, 00:11:57.605 "claimed": true, 00:11:57.605 "claim_type": "exclusive_write", 00:11:57.605 "zoned": false, 00:11:57.605 "supported_io_types": { 00:11:57.605 "read": true, 00:11:57.605 "write": true, 00:11:57.605 "unmap": true, 00:11:57.605 "flush": true, 00:11:57.605 "reset": true, 00:11:57.605 "nvme_admin": false, 00:11:57.605 "nvme_io": false, 00:11:57.605 "nvme_io_md": false, 00:11:57.605 "write_zeroes": true, 00:11:57.605 "zcopy": true, 00:11:57.605 "get_zone_info": false, 00:11:57.605 "zone_management": false, 00:11:57.605 "zone_append": false, 00:11:57.605 "compare": false, 00:11:57.605 "compare_and_write": false, 00:11:57.605 "abort": true, 00:11:57.605 "seek_hole": false, 00:11:57.605 "seek_data": false, 00:11:57.605 "copy": true, 00:11:57.605 "nvme_iov_md": false 00:11:57.605 }, 00:11:57.605 "memory_domains": [ 00:11:57.605 { 00:11:57.605 "dma_device_id": "system", 00:11:57.605 "dma_device_type": 1 00:11:57.605 }, 00:11:57.605 { 00:11:57.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.605 "dma_device_type": 2 00:11:57.605 } 00:11:57.605 ], 00:11:57.605 "driver_specific": {} 00:11:57.605 } 00:11:57.605 ] 00:11:57.605 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.606 "name": "Existed_Raid", 00:11:57.606 "uuid": "72709421-f828-4380-a1d6-334bac03ad66", 00:11:57.606 "strip_size_kb": 64, 00:11:57.606 "state": "online", 00:11:57.606 "raid_level": "concat", 00:11:57.606 "superblock": true, 00:11:57.606 "num_base_bdevs": 4, 
00:11:57.606 "num_base_bdevs_discovered": 4, 00:11:57.606 "num_base_bdevs_operational": 4, 00:11:57.606 "base_bdevs_list": [ 00:11:57.606 { 00:11:57.606 "name": "BaseBdev1", 00:11:57.606 "uuid": "19a6ea66-c2aa-49e2-9659-79d1b46ec95b", 00:11:57.606 "is_configured": true, 00:11:57.606 "data_offset": 2048, 00:11:57.606 "data_size": 63488 00:11:57.606 }, 00:11:57.606 { 00:11:57.606 "name": "BaseBdev2", 00:11:57.606 "uuid": "fd41595c-48ef-4a64-9166-166c5bc3579b", 00:11:57.606 "is_configured": true, 00:11:57.606 "data_offset": 2048, 00:11:57.606 "data_size": 63488 00:11:57.606 }, 00:11:57.606 { 00:11:57.606 "name": "BaseBdev3", 00:11:57.606 "uuid": "4eb8189e-8b58-4bc5-b6c6-1a8f006d8c20", 00:11:57.606 "is_configured": true, 00:11:57.606 "data_offset": 2048, 00:11:57.606 "data_size": 63488 00:11:57.606 }, 00:11:57.606 { 00:11:57.606 "name": "BaseBdev4", 00:11:57.606 "uuid": "884110e8-4a20-4a97-aa25-54a6c2cfef63", 00:11:57.606 "is_configured": true, 00:11:57.606 "data_offset": 2048, 00:11:57.606 "data_size": 63488 00:11:57.606 } 00:11:57.606 ] 00:11:57.606 }' 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.606 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.173 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:58.173 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:58.173 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:58.173 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:58.173 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:58.173 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:58.174 
16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.174 [2024-11-08 16:53:27.529369] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:58.174 "name": "Existed_Raid", 00:11:58.174 "aliases": [ 00:11:58.174 "72709421-f828-4380-a1d6-334bac03ad66" 00:11:58.174 ], 00:11:58.174 "product_name": "Raid Volume", 00:11:58.174 "block_size": 512, 00:11:58.174 "num_blocks": 253952, 00:11:58.174 "uuid": "72709421-f828-4380-a1d6-334bac03ad66", 00:11:58.174 "assigned_rate_limits": { 00:11:58.174 "rw_ios_per_sec": 0, 00:11:58.174 "rw_mbytes_per_sec": 0, 00:11:58.174 "r_mbytes_per_sec": 0, 00:11:58.174 "w_mbytes_per_sec": 0 00:11:58.174 }, 00:11:58.174 "claimed": false, 00:11:58.174 "zoned": false, 00:11:58.174 "supported_io_types": { 00:11:58.174 "read": true, 00:11:58.174 "write": true, 00:11:58.174 "unmap": true, 00:11:58.174 "flush": true, 00:11:58.174 "reset": true, 00:11:58.174 "nvme_admin": false, 00:11:58.174 "nvme_io": false, 00:11:58.174 "nvme_io_md": false, 00:11:58.174 "write_zeroes": true, 00:11:58.174 "zcopy": false, 00:11:58.174 "get_zone_info": false, 00:11:58.174 "zone_management": false, 00:11:58.174 "zone_append": false, 00:11:58.174 "compare": false, 00:11:58.174 "compare_and_write": false, 00:11:58.174 "abort": false, 00:11:58.174 "seek_hole": false, 00:11:58.174 "seek_data": false, 00:11:58.174 "copy": false, 00:11:58.174 
"nvme_iov_md": false 00:11:58.174 }, 00:11:58.174 "memory_domains": [ 00:11:58.174 { 00:11:58.174 "dma_device_id": "system", 00:11:58.174 "dma_device_type": 1 00:11:58.174 }, 00:11:58.174 { 00:11:58.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.174 "dma_device_type": 2 00:11:58.174 }, 00:11:58.174 { 00:11:58.174 "dma_device_id": "system", 00:11:58.174 "dma_device_type": 1 00:11:58.174 }, 00:11:58.174 { 00:11:58.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.174 "dma_device_type": 2 00:11:58.174 }, 00:11:58.174 { 00:11:58.174 "dma_device_id": "system", 00:11:58.174 "dma_device_type": 1 00:11:58.174 }, 00:11:58.174 { 00:11:58.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.174 "dma_device_type": 2 00:11:58.174 }, 00:11:58.174 { 00:11:58.174 "dma_device_id": "system", 00:11:58.174 "dma_device_type": 1 00:11:58.174 }, 00:11:58.174 { 00:11:58.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.174 "dma_device_type": 2 00:11:58.174 } 00:11:58.174 ], 00:11:58.174 "driver_specific": { 00:11:58.174 "raid": { 00:11:58.174 "uuid": "72709421-f828-4380-a1d6-334bac03ad66", 00:11:58.174 "strip_size_kb": 64, 00:11:58.174 "state": "online", 00:11:58.174 "raid_level": "concat", 00:11:58.174 "superblock": true, 00:11:58.174 "num_base_bdevs": 4, 00:11:58.174 "num_base_bdevs_discovered": 4, 00:11:58.174 "num_base_bdevs_operational": 4, 00:11:58.174 "base_bdevs_list": [ 00:11:58.174 { 00:11:58.174 "name": "BaseBdev1", 00:11:58.174 "uuid": "19a6ea66-c2aa-49e2-9659-79d1b46ec95b", 00:11:58.174 "is_configured": true, 00:11:58.174 "data_offset": 2048, 00:11:58.174 "data_size": 63488 00:11:58.174 }, 00:11:58.174 { 00:11:58.174 "name": "BaseBdev2", 00:11:58.174 "uuid": "fd41595c-48ef-4a64-9166-166c5bc3579b", 00:11:58.174 "is_configured": true, 00:11:58.174 "data_offset": 2048, 00:11:58.174 "data_size": 63488 00:11:58.174 }, 00:11:58.174 { 00:11:58.174 "name": "BaseBdev3", 00:11:58.174 "uuid": "4eb8189e-8b58-4bc5-b6c6-1a8f006d8c20", 00:11:58.174 "is_configured": true, 
00:11:58.174 "data_offset": 2048, 00:11:58.174 "data_size": 63488 00:11:58.174 }, 00:11:58.174 { 00:11:58.174 "name": "BaseBdev4", 00:11:58.174 "uuid": "884110e8-4a20-4a97-aa25-54a6c2cfef63", 00:11:58.174 "is_configured": true, 00:11:58.174 "data_offset": 2048, 00:11:58.174 "data_size": 63488 00:11:58.174 } 00:11:58.174 ] 00:11:58.174 } 00:11:58.174 } 00:11:58.174 }' 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:58.174 BaseBdev2 00:11:58.174 BaseBdev3 00:11:58.174 BaseBdev4' 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.174 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.433 16:53:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.433 [2024-11-08 16:53:27.836489] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.433 [2024-11-08 16:53:27.836579] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.433 [2024-11-08 16:53:27.836716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:58.433 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.434 "name": "Existed_Raid", 00:11:58.434 "uuid": "72709421-f828-4380-a1d6-334bac03ad66", 00:11:58.434 "strip_size_kb": 64, 00:11:58.434 "state": "offline", 00:11:58.434 "raid_level": "concat", 00:11:58.434 "superblock": true, 00:11:58.434 "num_base_bdevs": 4, 00:11:58.434 "num_base_bdevs_discovered": 3, 00:11:58.434 "num_base_bdevs_operational": 3, 00:11:58.434 "base_bdevs_list": [ 00:11:58.434 { 00:11:58.434 "name": null, 00:11:58.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.434 "is_configured": false, 00:11:58.434 "data_offset": 0, 00:11:58.434 "data_size": 63488 00:11:58.434 }, 00:11:58.434 { 00:11:58.434 "name": "BaseBdev2", 00:11:58.434 "uuid": "fd41595c-48ef-4a64-9166-166c5bc3579b", 00:11:58.434 "is_configured": true, 00:11:58.434 "data_offset": 2048, 00:11:58.434 "data_size": 63488 00:11:58.434 }, 00:11:58.434 { 00:11:58.434 "name": "BaseBdev3", 00:11:58.434 "uuid": "4eb8189e-8b58-4bc5-b6c6-1a8f006d8c20", 00:11:58.434 "is_configured": true, 00:11:58.434 "data_offset": 2048, 00:11:58.434 "data_size": 63488 00:11:58.434 }, 00:11:58.434 { 00:11:58.434 "name": "BaseBdev4", 00:11:58.434 "uuid": "884110e8-4a20-4a97-aa25-54a6c2cfef63", 00:11:58.434 "is_configured": true, 00:11:58.434 "data_offset": 2048, 00:11:58.434 "data_size": 63488 00:11:58.434 } 00:11:58.434 ] 00:11:58.434 }' 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.434 16:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:59.003 16:53:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 [2024-11-08 16:53:28.379744] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 [2024-11-08 16:53:28.451099] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:59.003 16:53:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.003 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 [2024-11-08 16:53:28.522367] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:59.003 [2024-11-08 16:53:28.522460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.263 BaseBdev2 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:59.263 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.264 [ 00:11:59.264 { 00:11:59.264 "name": "BaseBdev2", 00:11:59.264 "aliases": [ 00:11:59.264 
"b201aa0e-e653-44a7-8af2-0665deb768f6" 00:11:59.264 ], 00:11:59.264 "product_name": "Malloc disk", 00:11:59.264 "block_size": 512, 00:11:59.264 "num_blocks": 65536, 00:11:59.264 "uuid": "b201aa0e-e653-44a7-8af2-0665deb768f6", 00:11:59.264 "assigned_rate_limits": { 00:11:59.264 "rw_ios_per_sec": 0, 00:11:59.264 "rw_mbytes_per_sec": 0, 00:11:59.264 "r_mbytes_per_sec": 0, 00:11:59.264 "w_mbytes_per_sec": 0 00:11:59.264 }, 00:11:59.264 "claimed": false, 00:11:59.264 "zoned": false, 00:11:59.264 "supported_io_types": { 00:11:59.264 "read": true, 00:11:59.264 "write": true, 00:11:59.264 "unmap": true, 00:11:59.264 "flush": true, 00:11:59.264 "reset": true, 00:11:59.264 "nvme_admin": false, 00:11:59.264 "nvme_io": false, 00:11:59.264 "nvme_io_md": false, 00:11:59.264 "write_zeroes": true, 00:11:59.264 "zcopy": true, 00:11:59.264 "get_zone_info": false, 00:11:59.264 "zone_management": false, 00:11:59.264 "zone_append": false, 00:11:59.264 "compare": false, 00:11:59.264 "compare_and_write": false, 00:11:59.264 "abort": true, 00:11:59.264 "seek_hole": false, 00:11:59.264 "seek_data": false, 00:11:59.264 "copy": true, 00:11:59.264 "nvme_iov_md": false 00:11:59.264 }, 00:11:59.264 "memory_domains": [ 00:11:59.264 { 00:11:59.264 "dma_device_id": "system", 00:11:59.264 "dma_device_type": 1 00:11:59.264 }, 00:11:59.264 { 00:11:59.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.264 "dma_device_type": 2 00:11:59.264 } 00:11:59.264 ], 00:11:59.264 "driver_specific": {} 00:11:59.264 } 00:11:59.264 ] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:59.264 16:53:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.264 BaseBdev3 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.264 [ 00:11:59.264 { 
00:11:59.264 "name": "BaseBdev3", 00:11:59.264 "aliases": [ 00:11:59.264 "5a8a5004-0713-47e0-a30e-d596e49b66f7" 00:11:59.264 ], 00:11:59.264 "product_name": "Malloc disk", 00:11:59.264 "block_size": 512, 00:11:59.264 "num_blocks": 65536, 00:11:59.264 "uuid": "5a8a5004-0713-47e0-a30e-d596e49b66f7", 00:11:59.264 "assigned_rate_limits": { 00:11:59.264 "rw_ios_per_sec": 0, 00:11:59.264 "rw_mbytes_per_sec": 0, 00:11:59.264 "r_mbytes_per_sec": 0, 00:11:59.264 "w_mbytes_per_sec": 0 00:11:59.264 }, 00:11:59.264 "claimed": false, 00:11:59.264 "zoned": false, 00:11:59.264 "supported_io_types": { 00:11:59.264 "read": true, 00:11:59.264 "write": true, 00:11:59.264 "unmap": true, 00:11:59.264 "flush": true, 00:11:59.264 "reset": true, 00:11:59.264 "nvme_admin": false, 00:11:59.264 "nvme_io": false, 00:11:59.264 "nvme_io_md": false, 00:11:59.264 "write_zeroes": true, 00:11:59.264 "zcopy": true, 00:11:59.264 "get_zone_info": false, 00:11:59.264 "zone_management": false, 00:11:59.264 "zone_append": false, 00:11:59.264 "compare": false, 00:11:59.264 "compare_and_write": false, 00:11:59.264 "abort": true, 00:11:59.264 "seek_hole": false, 00:11:59.264 "seek_data": false, 00:11:59.264 "copy": true, 00:11:59.264 "nvme_iov_md": false 00:11:59.264 }, 00:11:59.264 "memory_domains": [ 00:11:59.264 { 00:11:59.264 "dma_device_id": "system", 00:11:59.264 "dma_device_type": 1 00:11:59.264 }, 00:11:59.264 { 00:11:59.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.264 "dma_device_type": 2 00:11:59.264 } 00:11:59.264 ], 00:11:59.264 "driver_specific": {} 00:11:59.264 } 00:11:59.264 ] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.264 BaseBdev4 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.264 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:59.264 [ 00:11:59.264 { 00:11:59.264 "name": "BaseBdev4", 00:11:59.264 "aliases": [ 00:11:59.264 "86cbc6fc-169b-479e-8d6e-78f0ba8d33f9" 00:11:59.264 ], 00:11:59.264 "product_name": "Malloc disk", 00:11:59.264 "block_size": 512, 00:11:59.264 "num_blocks": 65536, 00:11:59.264 "uuid": "86cbc6fc-169b-479e-8d6e-78f0ba8d33f9", 00:11:59.264 "assigned_rate_limits": { 00:11:59.264 "rw_ios_per_sec": 0, 00:11:59.264 "rw_mbytes_per_sec": 0, 00:11:59.264 "r_mbytes_per_sec": 0, 00:11:59.264 "w_mbytes_per_sec": 0 00:11:59.264 }, 00:11:59.264 "claimed": false, 00:11:59.264 "zoned": false, 00:11:59.264 "supported_io_types": { 00:11:59.264 "read": true, 00:11:59.264 "write": true, 00:11:59.264 "unmap": true, 00:11:59.264 "flush": true, 00:11:59.264 "reset": true, 00:11:59.264 "nvme_admin": false, 00:11:59.264 "nvme_io": false, 00:11:59.264 "nvme_io_md": false, 00:11:59.264 "write_zeroes": true, 00:11:59.264 "zcopy": true, 00:11:59.265 "get_zone_info": false, 00:11:59.265 "zone_management": false, 00:11:59.265 "zone_append": false, 00:11:59.265 "compare": false, 00:11:59.265 "compare_and_write": false, 00:11:59.265 "abort": true, 00:11:59.265 "seek_hole": false, 00:11:59.265 "seek_data": false, 00:11:59.265 "copy": true, 00:11:59.265 "nvme_iov_md": false 00:11:59.265 }, 00:11:59.265 "memory_domains": [ 00:11:59.265 { 00:11:59.265 "dma_device_id": "system", 00:11:59.265 "dma_device_type": 1 00:11:59.265 }, 00:11:59.265 { 00:11:59.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.265 "dma_device_type": 2 00:11:59.265 } 00:11:59.265 ], 00:11:59.265 "driver_specific": {} 00:11:59.265 } 00:11:59.265 ] 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:59.265 16:53:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.265 [2024-11-08 16:53:28.760083] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.265 [2024-11-08 16:53:28.760188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.265 [2024-11-08 16:53:28.760237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.265 [2024-11-08 16:53:28.762121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.265 [2024-11-08 16:53:28.762212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.265 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.524 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.524 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.524 "name": "Existed_Raid", 00:11:59.524 "uuid": "b4453e19-afe7-46d2-9e4c-46f8790f84ef", 00:11:59.524 "strip_size_kb": 64, 00:11:59.524 "state": "configuring", 00:11:59.524 "raid_level": "concat", 00:11:59.524 "superblock": true, 00:11:59.524 "num_base_bdevs": 4, 00:11:59.524 "num_base_bdevs_discovered": 3, 00:11:59.524 "num_base_bdevs_operational": 4, 00:11:59.524 "base_bdevs_list": [ 00:11:59.524 { 00:11:59.524 "name": "BaseBdev1", 00:11:59.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.524 "is_configured": false, 00:11:59.524 "data_offset": 0, 00:11:59.524 "data_size": 0 00:11:59.524 }, 00:11:59.524 { 00:11:59.525 "name": "BaseBdev2", 00:11:59.525 "uuid": "b201aa0e-e653-44a7-8af2-0665deb768f6", 00:11:59.525 "is_configured": true, 00:11:59.525 "data_offset": 2048, 00:11:59.525 "data_size": 63488 
00:11:59.525 }, 00:11:59.525 { 00:11:59.525 "name": "BaseBdev3", 00:11:59.525 "uuid": "5a8a5004-0713-47e0-a30e-d596e49b66f7", 00:11:59.525 "is_configured": true, 00:11:59.525 "data_offset": 2048, 00:11:59.525 "data_size": 63488 00:11:59.525 }, 00:11:59.525 { 00:11:59.525 "name": "BaseBdev4", 00:11:59.525 "uuid": "86cbc6fc-169b-479e-8d6e-78f0ba8d33f9", 00:11:59.525 "is_configured": true, 00:11:59.525 "data_offset": 2048, 00:11:59.525 "data_size": 63488 00:11:59.525 } 00:11:59.525 ] 00:11:59.525 }' 00:11:59.525 16:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.525 16:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.784 [2024-11-08 16:53:29.251359] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.784 "name": "Existed_Raid", 00:11:59.784 "uuid": "b4453e19-afe7-46d2-9e4c-46f8790f84ef", 00:11:59.784 "strip_size_kb": 64, 00:11:59.784 "state": "configuring", 00:11:59.784 "raid_level": "concat", 00:11:59.784 "superblock": true, 00:11:59.784 "num_base_bdevs": 4, 00:11:59.784 "num_base_bdevs_discovered": 2, 00:11:59.784 "num_base_bdevs_operational": 4, 00:11:59.784 "base_bdevs_list": [ 00:11:59.784 { 00:11:59.784 "name": "BaseBdev1", 00:11:59.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.784 "is_configured": false, 00:11:59.784 "data_offset": 0, 00:11:59.784 "data_size": 0 00:11:59.784 }, 00:11:59.784 { 00:11:59.784 "name": null, 00:11:59.784 "uuid": "b201aa0e-e653-44a7-8af2-0665deb768f6", 00:11:59.784 "is_configured": false, 00:11:59.784 "data_offset": 0, 00:11:59.784 "data_size": 63488 
00:11:59.784 }, 00:11:59.784 { 00:11:59.784 "name": "BaseBdev3", 00:11:59.784 "uuid": "5a8a5004-0713-47e0-a30e-d596e49b66f7", 00:11:59.784 "is_configured": true, 00:11:59.784 "data_offset": 2048, 00:11:59.784 "data_size": 63488 00:11:59.784 }, 00:11:59.784 { 00:11:59.784 "name": "BaseBdev4", 00:11:59.784 "uuid": "86cbc6fc-169b-479e-8d6e-78f0ba8d33f9", 00:11:59.784 "is_configured": true, 00:11:59.784 "data_offset": 2048, 00:11:59.784 "data_size": 63488 00:11:59.784 } 00:11:59.784 ] 00:11:59.784 }' 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.784 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.351 [2024-11-08 16:53:29.797601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.351 BaseBdev1 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.351 [ 00:12:00.351 { 00:12:00.351 "name": "BaseBdev1", 00:12:00.351 "aliases": [ 00:12:00.351 "0e129c46-e812-4f64-9d1d-f550e954a7ec" 00:12:00.351 ], 00:12:00.351 "product_name": "Malloc disk", 00:12:00.351 "block_size": 512, 00:12:00.351 "num_blocks": 65536, 00:12:00.351 "uuid": "0e129c46-e812-4f64-9d1d-f550e954a7ec", 00:12:00.351 "assigned_rate_limits": { 00:12:00.351 "rw_ios_per_sec": 0, 00:12:00.351 "rw_mbytes_per_sec": 0, 
00:12:00.351 "r_mbytes_per_sec": 0, 00:12:00.351 "w_mbytes_per_sec": 0 00:12:00.351 }, 00:12:00.351 "claimed": true, 00:12:00.351 "claim_type": "exclusive_write", 00:12:00.351 "zoned": false, 00:12:00.351 "supported_io_types": { 00:12:00.351 "read": true, 00:12:00.351 "write": true, 00:12:00.351 "unmap": true, 00:12:00.351 "flush": true, 00:12:00.351 "reset": true, 00:12:00.351 "nvme_admin": false, 00:12:00.351 "nvme_io": false, 00:12:00.351 "nvme_io_md": false, 00:12:00.351 "write_zeroes": true, 00:12:00.351 "zcopy": true, 00:12:00.351 "get_zone_info": false, 00:12:00.351 "zone_management": false, 00:12:00.351 "zone_append": false, 00:12:00.351 "compare": false, 00:12:00.351 "compare_and_write": false, 00:12:00.351 "abort": true, 00:12:00.351 "seek_hole": false, 00:12:00.351 "seek_data": false, 00:12:00.351 "copy": true, 00:12:00.351 "nvme_iov_md": false 00:12:00.351 }, 00:12:00.351 "memory_domains": [ 00:12:00.351 { 00:12:00.351 "dma_device_id": "system", 00:12:00.351 "dma_device_type": 1 00:12:00.351 }, 00:12:00.351 { 00:12:00.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.351 "dma_device_type": 2 00:12:00.351 } 00:12:00.351 ], 00:12:00.351 "driver_specific": {} 00:12:00.351 } 00:12:00.351 ] 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.351 16:53:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.351 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.611 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.611 "name": "Existed_Raid", 00:12:00.611 "uuid": "b4453e19-afe7-46d2-9e4c-46f8790f84ef", 00:12:00.611 "strip_size_kb": 64, 00:12:00.611 "state": "configuring", 00:12:00.611 "raid_level": "concat", 00:12:00.611 "superblock": true, 00:12:00.611 "num_base_bdevs": 4, 00:12:00.611 "num_base_bdevs_discovered": 3, 00:12:00.611 "num_base_bdevs_operational": 4, 00:12:00.611 "base_bdevs_list": [ 00:12:00.611 { 00:12:00.611 "name": "BaseBdev1", 00:12:00.611 "uuid": "0e129c46-e812-4f64-9d1d-f550e954a7ec", 00:12:00.611 "is_configured": true, 00:12:00.611 "data_offset": 2048, 00:12:00.611 "data_size": 63488 00:12:00.611 }, 00:12:00.611 { 
00:12:00.611 "name": null, 00:12:00.611 "uuid": "b201aa0e-e653-44a7-8af2-0665deb768f6", 00:12:00.611 "is_configured": false, 00:12:00.611 "data_offset": 0, 00:12:00.611 "data_size": 63488 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "name": "BaseBdev3", 00:12:00.611 "uuid": "5a8a5004-0713-47e0-a30e-d596e49b66f7", 00:12:00.611 "is_configured": true, 00:12:00.611 "data_offset": 2048, 00:12:00.611 "data_size": 63488 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "name": "BaseBdev4", 00:12:00.611 "uuid": "86cbc6fc-169b-479e-8d6e-78f0ba8d33f9", 00:12:00.611 "is_configured": true, 00:12:00.611 "data_offset": 2048, 00:12:00.611 "data_size": 63488 00:12:00.611 } 00:12:00.611 ] 00:12:00.611 }' 00:12:00.611 16:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.611 16:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.871 [2024-11-08 16:53:30.324736] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.871 16:53:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.871 "name": "Existed_Raid", 00:12:00.871 "uuid": "b4453e19-afe7-46d2-9e4c-46f8790f84ef", 00:12:00.871 "strip_size_kb": 64, 00:12:00.871 "state": "configuring", 00:12:00.871 "raid_level": "concat", 00:12:00.871 "superblock": true, 00:12:00.871 "num_base_bdevs": 4, 00:12:00.871 "num_base_bdevs_discovered": 2, 00:12:00.871 "num_base_bdevs_operational": 4, 00:12:00.871 "base_bdevs_list": [ 00:12:00.871 { 00:12:00.871 "name": "BaseBdev1", 00:12:00.871 "uuid": "0e129c46-e812-4f64-9d1d-f550e954a7ec", 00:12:00.871 "is_configured": true, 00:12:00.871 "data_offset": 2048, 00:12:00.871 "data_size": 63488 00:12:00.871 }, 00:12:00.871 { 00:12:00.871 "name": null, 00:12:00.871 "uuid": "b201aa0e-e653-44a7-8af2-0665deb768f6", 00:12:00.871 "is_configured": false, 00:12:00.871 "data_offset": 0, 00:12:00.871 "data_size": 63488 00:12:00.871 }, 00:12:00.871 { 00:12:00.871 "name": null, 00:12:00.871 "uuid": "5a8a5004-0713-47e0-a30e-d596e49b66f7", 00:12:00.871 "is_configured": false, 00:12:00.871 "data_offset": 0, 00:12:00.871 "data_size": 63488 00:12:00.871 }, 00:12:00.871 { 00:12:00.871 "name": "BaseBdev4", 00:12:00.871 "uuid": "86cbc6fc-169b-479e-8d6e-78f0ba8d33f9", 00:12:00.871 "is_configured": true, 00:12:00.871 "data_offset": 2048, 00:12:00.871 "data_size": 63488 00:12:00.871 } 00:12:00.871 ] 00:12:00.871 }' 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.871 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.440 16:53:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.440 [2024-11-08 16:53:30.776031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.440 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.440 "name": "Existed_Raid", 00:12:01.440 "uuid": "b4453e19-afe7-46d2-9e4c-46f8790f84ef", 00:12:01.440 "strip_size_kb": 64, 00:12:01.440 "state": "configuring", 00:12:01.440 "raid_level": "concat", 00:12:01.440 "superblock": true, 00:12:01.440 "num_base_bdevs": 4, 00:12:01.440 "num_base_bdevs_discovered": 3, 00:12:01.440 "num_base_bdevs_operational": 4, 00:12:01.440 "base_bdevs_list": [ 00:12:01.440 { 00:12:01.440 "name": "BaseBdev1", 00:12:01.440 "uuid": "0e129c46-e812-4f64-9d1d-f550e954a7ec", 00:12:01.440 "is_configured": true, 00:12:01.440 "data_offset": 2048, 00:12:01.440 "data_size": 63488 00:12:01.440 }, 00:12:01.440 { 00:12:01.440 "name": null, 00:12:01.440 "uuid": "b201aa0e-e653-44a7-8af2-0665deb768f6", 00:12:01.440 "is_configured": false, 00:12:01.440 "data_offset": 0, 00:12:01.440 "data_size": 63488 00:12:01.441 }, 00:12:01.441 { 00:12:01.441 "name": "BaseBdev3", 00:12:01.441 "uuid": "5a8a5004-0713-47e0-a30e-d596e49b66f7", 00:12:01.441 "is_configured": true, 00:12:01.441 "data_offset": 2048, 00:12:01.441 "data_size": 63488 00:12:01.441 }, 00:12:01.441 { 00:12:01.441 "name": "BaseBdev4", 00:12:01.441 "uuid": 
"86cbc6fc-169b-479e-8d6e-78f0ba8d33f9", 00:12:01.441 "is_configured": true, 00:12:01.441 "data_offset": 2048, 00:12:01.441 "data_size": 63488 00:12:01.441 } 00:12:01.441 ] 00:12:01.441 }' 00:12:01.441 16:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.441 16:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.010 [2024-11-08 16:53:31.319327] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.010 "name": "Existed_Raid", 00:12:02.010 "uuid": "b4453e19-afe7-46d2-9e4c-46f8790f84ef", 00:12:02.010 "strip_size_kb": 64, 00:12:02.010 "state": "configuring", 00:12:02.010 "raid_level": "concat", 00:12:02.010 "superblock": true, 00:12:02.010 "num_base_bdevs": 4, 00:12:02.010 "num_base_bdevs_discovered": 2, 00:12:02.010 "num_base_bdevs_operational": 4, 00:12:02.010 "base_bdevs_list": [ 00:12:02.010 { 00:12:02.010 "name": null, 00:12:02.010 
"uuid": "0e129c46-e812-4f64-9d1d-f550e954a7ec", 00:12:02.010 "is_configured": false, 00:12:02.010 "data_offset": 0, 00:12:02.010 "data_size": 63488 00:12:02.010 }, 00:12:02.010 { 00:12:02.010 "name": null, 00:12:02.010 "uuid": "b201aa0e-e653-44a7-8af2-0665deb768f6", 00:12:02.010 "is_configured": false, 00:12:02.010 "data_offset": 0, 00:12:02.010 "data_size": 63488 00:12:02.010 }, 00:12:02.010 { 00:12:02.010 "name": "BaseBdev3", 00:12:02.010 "uuid": "5a8a5004-0713-47e0-a30e-d596e49b66f7", 00:12:02.010 "is_configured": true, 00:12:02.010 "data_offset": 2048, 00:12:02.010 "data_size": 63488 00:12:02.010 }, 00:12:02.010 { 00:12:02.010 "name": "BaseBdev4", 00:12:02.010 "uuid": "86cbc6fc-169b-479e-8d6e-78f0ba8d33f9", 00:12:02.010 "is_configured": true, 00:12:02.010 "data_offset": 2048, 00:12:02.010 "data_size": 63488 00:12:02.010 } 00:12:02.010 ] 00:12:02.010 }' 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.010 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.336 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.336 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.336 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.336 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.613 [2024-11-08 16:53:31.877382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.613 16:53:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.613 "name": "Existed_Raid", 00:12:02.613 "uuid": "b4453e19-afe7-46d2-9e4c-46f8790f84ef", 00:12:02.613 "strip_size_kb": 64, 00:12:02.613 "state": "configuring", 00:12:02.613 "raid_level": "concat", 00:12:02.613 "superblock": true, 00:12:02.613 "num_base_bdevs": 4, 00:12:02.613 "num_base_bdevs_discovered": 3, 00:12:02.613 "num_base_bdevs_operational": 4, 00:12:02.613 "base_bdevs_list": [ 00:12:02.613 { 00:12:02.613 "name": null, 00:12:02.613 "uuid": "0e129c46-e812-4f64-9d1d-f550e954a7ec", 00:12:02.613 "is_configured": false, 00:12:02.613 "data_offset": 0, 00:12:02.613 "data_size": 63488 00:12:02.613 }, 00:12:02.613 { 00:12:02.613 "name": "BaseBdev2", 00:12:02.613 "uuid": "b201aa0e-e653-44a7-8af2-0665deb768f6", 00:12:02.613 "is_configured": true, 00:12:02.613 "data_offset": 2048, 00:12:02.613 "data_size": 63488 00:12:02.613 }, 00:12:02.613 { 00:12:02.613 "name": "BaseBdev3", 00:12:02.613 "uuid": "5a8a5004-0713-47e0-a30e-d596e49b66f7", 00:12:02.613 "is_configured": true, 00:12:02.613 "data_offset": 2048, 00:12:02.613 "data_size": 63488 00:12:02.613 }, 00:12:02.613 { 00:12:02.613 "name": "BaseBdev4", 00:12:02.613 "uuid": "86cbc6fc-169b-479e-8d6e-78f0ba8d33f9", 00:12:02.613 "is_configured": true, 00:12:02.613 "data_offset": 2048, 00:12:02.613 "data_size": 63488 00:12:02.613 } 00:12:02.613 ] 00:12:02.613 }' 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.613 16:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.872 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.872 16:53:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.872 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.872 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:02.872 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.872 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:02.872 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:02.872 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.872 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.872 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.872 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0e129c46-e812-4f64-9d1d-f550e954a7ec 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.131 NewBaseBdev 00:12:03.131 [2024-11-08 16:53:32.415917] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:03.131 [2024-11-08 16:53:32.416131] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:03.131 [2024-11-08 16:53:32.416146] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:03.131 [2024-11-08 16:53:32.416437] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:03.131 [2024-11-08 16:53:32.416572] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:03.131 [2024-11-08 16:53:32.416587] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:12:03.131 [2024-11-08 16:53:32.416718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:03.131 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.131 
16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.131 [ 00:12:03.131 { 00:12:03.131 "name": "NewBaseBdev", 00:12:03.131 "aliases": [ 00:12:03.131 "0e129c46-e812-4f64-9d1d-f550e954a7ec" 00:12:03.131 ], 00:12:03.131 "product_name": "Malloc disk", 00:12:03.131 "block_size": 512, 00:12:03.131 "num_blocks": 65536, 00:12:03.132 "uuid": "0e129c46-e812-4f64-9d1d-f550e954a7ec", 00:12:03.132 "assigned_rate_limits": { 00:12:03.132 "rw_ios_per_sec": 0, 00:12:03.132 "rw_mbytes_per_sec": 0, 00:12:03.132 "r_mbytes_per_sec": 0, 00:12:03.132 "w_mbytes_per_sec": 0 00:12:03.132 }, 00:12:03.132 "claimed": true, 00:12:03.132 "claim_type": "exclusive_write", 00:12:03.132 "zoned": false, 00:12:03.132 "supported_io_types": { 00:12:03.132 "read": true, 00:12:03.132 "write": true, 00:12:03.132 "unmap": true, 00:12:03.132 "flush": true, 00:12:03.132 "reset": true, 00:12:03.132 "nvme_admin": false, 00:12:03.132 "nvme_io": false, 00:12:03.132 "nvme_io_md": false, 00:12:03.132 "write_zeroes": true, 00:12:03.132 "zcopy": true, 00:12:03.132 "get_zone_info": false, 00:12:03.132 "zone_management": false, 00:12:03.132 "zone_append": false, 00:12:03.132 "compare": false, 00:12:03.132 "compare_and_write": false, 00:12:03.132 "abort": true, 00:12:03.132 "seek_hole": false, 00:12:03.132 "seek_data": false, 00:12:03.132 "copy": true, 00:12:03.132 "nvme_iov_md": false 00:12:03.132 }, 00:12:03.132 "memory_domains": [ 00:12:03.132 { 00:12:03.132 "dma_device_id": "system", 00:12:03.132 "dma_device_type": 1 00:12:03.132 }, 00:12:03.132 { 00:12:03.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.132 "dma_device_type": 2 00:12:03.132 } 00:12:03.132 ], 00:12:03.132 "driver_specific": {} 00:12:03.132 } 00:12:03.132 ] 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:03.132 16:53:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.132 "name": "Existed_Raid", 00:12:03.132 "uuid": "b4453e19-afe7-46d2-9e4c-46f8790f84ef", 00:12:03.132 "strip_size_kb": 64, 00:12:03.132 
"state": "online", 00:12:03.132 "raid_level": "concat", 00:12:03.132 "superblock": true, 00:12:03.132 "num_base_bdevs": 4, 00:12:03.132 "num_base_bdevs_discovered": 4, 00:12:03.132 "num_base_bdevs_operational": 4, 00:12:03.132 "base_bdevs_list": [ 00:12:03.132 { 00:12:03.132 "name": "NewBaseBdev", 00:12:03.132 "uuid": "0e129c46-e812-4f64-9d1d-f550e954a7ec", 00:12:03.132 "is_configured": true, 00:12:03.132 "data_offset": 2048, 00:12:03.132 "data_size": 63488 00:12:03.132 }, 00:12:03.132 { 00:12:03.132 "name": "BaseBdev2", 00:12:03.132 "uuid": "b201aa0e-e653-44a7-8af2-0665deb768f6", 00:12:03.132 "is_configured": true, 00:12:03.132 "data_offset": 2048, 00:12:03.132 "data_size": 63488 00:12:03.132 }, 00:12:03.132 { 00:12:03.132 "name": "BaseBdev3", 00:12:03.132 "uuid": "5a8a5004-0713-47e0-a30e-d596e49b66f7", 00:12:03.132 "is_configured": true, 00:12:03.132 "data_offset": 2048, 00:12:03.132 "data_size": 63488 00:12:03.132 }, 00:12:03.132 { 00:12:03.132 "name": "BaseBdev4", 00:12:03.132 "uuid": "86cbc6fc-169b-479e-8d6e-78f0ba8d33f9", 00:12:03.132 "is_configured": true, 00:12:03.132 "data_offset": 2048, 00:12:03.132 "data_size": 63488 00:12:03.132 } 00:12:03.132 ] 00:12:03.132 }' 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.132 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.702 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.702 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:03.702 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.702 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.702 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.702 
16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.702 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:03.702 16:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.702 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.702 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.702 [2024-11-08 16:53:32.975651] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.702 16:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.702 "name": "Existed_Raid", 00:12:03.702 "aliases": [ 00:12:03.702 "b4453e19-afe7-46d2-9e4c-46f8790f84ef" 00:12:03.702 ], 00:12:03.702 "product_name": "Raid Volume", 00:12:03.702 "block_size": 512, 00:12:03.702 "num_blocks": 253952, 00:12:03.702 "uuid": "b4453e19-afe7-46d2-9e4c-46f8790f84ef", 00:12:03.702 "assigned_rate_limits": { 00:12:03.702 "rw_ios_per_sec": 0, 00:12:03.702 "rw_mbytes_per_sec": 0, 00:12:03.702 "r_mbytes_per_sec": 0, 00:12:03.702 "w_mbytes_per_sec": 0 00:12:03.702 }, 00:12:03.702 "claimed": false, 00:12:03.702 "zoned": false, 00:12:03.702 "supported_io_types": { 00:12:03.702 "read": true, 00:12:03.702 "write": true, 00:12:03.702 "unmap": true, 00:12:03.702 "flush": true, 00:12:03.702 "reset": true, 00:12:03.702 "nvme_admin": false, 00:12:03.702 "nvme_io": false, 00:12:03.702 "nvme_io_md": false, 00:12:03.702 "write_zeroes": true, 00:12:03.702 "zcopy": false, 00:12:03.702 "get_zone_info": false, 00:12:03.702 "zone_management": false, 00:12:03.702 "zone_append": false, 00:12:03.702 "compare": false, 00:12:03.702 "compare_and_write": false, 00:12:03.702 "abort": 
false, 00:12:03.702 "seek_hole": false, 00:12:03.702 "seek_data": false, 00:12:03.702 "copy": false, 00:12:03.702 "nvme_iov_md": false 00:12:03.702 }, 00:12:03.702 "memory_domains": [ 00:12:03.702 { 00:12:03.702 "dma_device_id": "system", 00:12:03.702 "dma_device_type": 1 00:12:03.702 }, 00:12:03.702 { 00:12:03.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.702 "dma_device_type": 2 00:12:03.702 }, 00:12:03.702 { 00:12:03.702 "dma_device_id": "system", 00:12:03.702 "dma_device_type": 1 00:12:03.702 }, 00:12:03.702 { 00:12:03.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.702 "dma_device_type": 2 00:12:03.702 }, 00:12:03.702 { 00:12:03.702 "dma_device_id": "system", 00:12:03.702 "dma_device_type": 1 00:12:03.702 }, 00:12:03.702 { 00:12:03.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.702 "dma_device_type": 2 00:12:03.702 }, 00:12:03.702 { 00:12:03.702 "dma_device_id": "system", 00:12:03.702 "dma_device_type": 1 00:12:03.702 }, 00:12:03.702 { 00:12:03.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.702 "dma_device_type": 2 00:12:03.702 } 00:12:03.702 ], 00:12:03.702 "driver_specific": { 00:12:03.702 "raid": { 00:12:03.702 "uuid": "b4453e19-afe7-46d2-9e4c-46f8790f84ef", 00:12:03.702 "strip_size_kb": 64, 00:12:03.702 "state": "online", 00:12:03.702 "raid_level": "concat", 00:12:03.702 "superblock": true, 00:12:03.702 "num_base_bdevs": 4, 00:12:03.702 "num_base_bdevs_discovered": 4, 00:12:03.702 "num_base_bdevs_operational": 4, 00:12:03.702 "base_bdevs_list": [ 00:12:03.702 { 00:12:03.702 "name": "NewBaseBdev", 00:12:03.702 "uuid": "0e129c46-e812-4f64-9d1d-f550e954a7ec", 00:12:03.702 "is_configured": true, 00:12:03.702 "data_offset": 2048, 00:12:03.702 "data_size": 63488 00:12:03.702 }, 00:12:03.702 { 00:12:03.702 "name": "BaseBdev2", 00:12:03.702 "uuid": "b201aa0e-e653-44a7-8af2-0665deb768f6", 00:12:03.702 "is_configured": true, 00:12:03.702 "data_offset": 2048, 00:12:03.702 "data_size": 63488 00:12:03.702 }, 00:12:03.702 { 00:12:03.702 
"name": "BaseBdev3", 00:12:03.702 "uuid": "5a8a5004-0713-47e0-a30e-d596e49b66f7", 00:12:03.702 "is_configured": true, 00:12:03.702 "data_offset": 2048, 00:12:03.702 "data_size": 63488 00:12:03.702 }, 00:12:03.702 { 00:12:03.702 "name": "BaseBdev4", 00:12:03.702 "uuid": "86cbc6fc-169b-479e-8d6e-78f0ba8d33f9", 00:12:03.702 "is_configured": true, 00:12:03.702 "data_offset": 2048, 00:12:03.702 "data_size": 63488 00:12:03.702 } 00:12:03.702 ] 00:12:03.702 } 00:12:03.702 } 00:12:03.702 }' 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:03.702 BaseBdev2 00:12:03.702 BaseBdev3 00:12:03.702 BaseBdev4' 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.702 16:53:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.702 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.703 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.703 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.703 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:03.703 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.703 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.703 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.703 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.963 [2024-11-08 16:53:33.319234] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.963 [2024-11-08 16:53:33.319276] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.963 [2024-11-08 16:53:33.319379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.963 [2024-11-08 16:53:33.319460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.963 [2024-11-08 16:53:33.319472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82821 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82821 ']' 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82821 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82821 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82821' 00:12:03.963 killing process with pid 82821 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82821 00:12:03.963 [2024-11-08 16:53:33.352680] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.963 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82821 00:12:03.963 [2024-11-08 16:53:33.396834] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.222 16:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:04.222 ************************************ 00:12:04.222 END TEST raid_state_function_test_sb 00:12:04.222 ************************************ 00:12:04.222 00:12:04.222 real 0m10.060s 00:12:04.222 user 0m17.211s 00:12:04.222 sys 
0m2.058s 00:12:04.222 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.222 16:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.222 16:53:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:04.222 16:53:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:04.222 16:53:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.222 16:53:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.222 ************************************ 00:12:04.222 START TEST raid_superblock_test 00:12:04.222 ************************************ 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83474 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83474 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83474 ']' 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.222 16:53:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:04.223 16:53:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.223 16:53:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:04.223 16:53:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.481 [2024-11-08 16:53:33.816430] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:04.481 [2024-11-08 16:53:33.816708] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83474 ] 00:12:04.481 [2024-11-08 16:53:33.968475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.740 [2024-11-08 16:53:34.031960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.740 [2024-11-08 16:53:34.077852] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.740 [2024-11-08 16:53:34.077906] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:05.308 
16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.308 malloc1 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.308 [2024-11-08 16:53:34.802322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:05.308 [2024-11-08 16:53:34.802469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.308 [2024-11-08 16:53:34.802536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:05.308 [2024-11-08 16:53:34.802588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.308 [2024-11-08 16:53:34.805150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.308 [2024-11-08 16:53:34.805238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:05.308 pt1 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.308 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.569 malloc2 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.569 [2024-11-08 16:53:34.847568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:05.569 [2024-11-08 16:53:34.847669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.569 [2024-11-08 16:53:34.847695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:05.569 [2024-11-08 16:53:34.847710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.569 [2024-11-08 16:53:34.850781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.569 [2024-11-08 16:53:34.850832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:05.569 
pt2 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.569 malloc3 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.569 [2024-11-08 16:53:34.877187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:05.569 [2024-11-08 16:53:34.877313] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.569 [2024-11-08 16:53:34.877358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:05.569 [2024-11-08 16:53:34.877409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.569 [2024-11-08 16:53:34.880182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.569 [2024-11-08 16:53:34.880278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:05.569 pt3 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:05.569 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.570 malloc4 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.570 [2024-11-08 16:53:34.910828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:05.570 [2024-11-08 16:53:34.910946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.570 [2024-11-08 16:53:34.911005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:05.570 [2024-11-08 16:53:34.911053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.570 [2024-11-08 16:53:34.913611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.570 [2024-11-08 16:53:34.913719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:05.570 pt4 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.570 [2024-11-08 16:53:34.926912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:05.570 [2024-11-08 
16:53:34.929319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:05.570 [2024-11-08 16:53:34.929395] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:05.570 [2024-11-08 16:53:34.929469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:05.570 [2024-11-08 16:53:34.929674] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:05.570 [2024-11-08 16:53:34.929693] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:05.570 [2024-11-08 16:53:34.930014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:05.570 [2024-11-08 16:53:34.930203] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:05.570 [2024-11-08 16:53:34.930221] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:05.570 [2024-11-08 16:53:34.930455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.570 "name": "raid_bdev1", 00:12:05.570 "uuid": "546094a0-1ccc-4326-99a5-9565c3d91072", 00:12:05.570 "strip_size_kb": 64, 00:12:05.570 "state": "online", 00:12:05.570 "raid_level": "concat", 00:12:05.570 "superblock": true, 00:12:05.570 "num_base_bdevs": 4, 00:12:05.570 "num_base_bdevs_discovered": 4, 00:12:05.570 "num_base_bdevs_operational": 4, 00:12:05.570 "base_bdevs_list": [ 00:12:05.570 { 00:12:05.570 "name": "pt1", 00:12:05.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.570 "is_configured": true, 00:12:05.570 "data_offset": 2048, 00:12:05.570 "data_size": 63488 00:12:05.570 }, 00:12:05.570 { 00:12:05.570 "name": "pt2", 00:12:05.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.570 "is_configured": true, 00:12:05.570 "data_offset": 2048, 00:12:05.570 "data_size": 63488 00:12:05.570 }, 00:12:05.570 { 00:12:05.570 "name": "pt3", 00:12:05.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.570 "is_configured": true, 00:12:05.570 "data_offset": 2048, 00:12:05.570 
"data_size": 63488 00:12:05.570 }, 00:12:05.570 { 00:12:05.570 "name": "pt4", 00:12:05.570 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.570 "is_configured": true, 00:12:05.570 "data_offset": 2048, 00:12:05.570 "data_size": 63488 00:12:05.570 } 00:12:05.570 ] 00:12:05.570 }' 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.570 16:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:06.140 [2024-11-08 16:53:35.414528] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.140 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:06.140 "name": "raid_bdev1", 00:12:06.140 "aliases": [ 00:12:06.140 "546094a0-1ccc-4326-99a5-9565c3d91072" 
00:12:06.140 ], 00:12:06.140 "product_name": "Raid Volume", 00:12:06.140 "block_size": 512, 00:12:06.140 "num_blocks": 253952, 00:12:06.140 "uuid": "546094a0-1ccc-4326-99a5-9565c3d91072", 00:12:06.140 "assigned_rate_limits": { 00:12:06.140 "rw_ios_per_sec": 0, 00:12:06.140 "rw_mbytes_per_sec": 0, 00:12:06.140 "r_mbytes_per_sec": 0, 00:12:06.140 "w_mbytes_per_sec": 0 00:12:06.140 }, 00:12:06.140 "claimed": false, 00:12:06.140 "zoned": false, 00:12:06.140 "supported_io_types": { 00:12:06.140 "read": true, 00:12:06.140 "write": true, 00:12:06.140 "unmap": true, 00:12:06.140 "flush": true, 00:12:06.140 "reset": true, 00:12:06.140 "nvme_admin": false, 00:12:06.140 "nvme_io": false, 00:12:06.140 "nvme_io_md": false, 00:12:06.140 "write_zeroes": true, 00:12:06.140 "zcopy": false, 00:12:06.140 "get_zone_info": false, 00:12:06.140 "zone_management": false, 00:12:06.140 "zone_append": false, 00:12:06.141 "compare": false, 00:12:06.141 "compare_and_write": false, 00:12:06.141 "abort": false, 00:12:06.141 "seek_hole": false, 00:12:06.141 "seek_data": false, 00:12:06.141 "copy": false, 00:12:06.141 "nvme_iov_md": false 00:12:06.141 }, 00:12:06.141 "memory_domains": [ 00:12:06.141 { 00:12:06.141 "dma_device_id": "system", 00:12:06.141 "dma_device_type": 1 00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.141 "dma_device_type": 2 00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "dma_device_id": "system", 00:12:06.141 "dma_device_type": 1 00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.141 "dma_device_type": 2 00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "dma_device_id": "system", 00:12:06.141 "dma_device_type": 1 00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.141 "dma_device_type": 2 00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "dma_device_id": "system", 00:12:06.141 "dma_device_type": 1 00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:06.141 "dma_device_type": 2 00:12:06.141 } 00:12:06.141 ], 00:12:06.141 "driver_specific": { 00:12:06.141 "raid": { 00:12:06.141 "uuid": "546094a0-1ccc-4326-99a5-9565c3d91072", 00:12:06.141 "strip_size_kb": 64, 00:12:06.141 "state": "online", 00:12:06.141 "raid_level": "concat", 00:12:06.141 "superblock": true, 00:12:06.141 "num_base_bdevs": 4, 00:12:06.141 "num_base_bdevs_discovered": 4, 00:12:06.141 "num_base_bdevs_operational": 4, 00:12:06.141 "base_bdevs_list": [ 00:12:06.141 { 00:12:06.141 "name": "pt1", 00:12:06.141 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.141 "is_configured": true, 00:12:06.141 "data_offset": 2048, 00:12:06.141 "data_size": 63488 00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "name": "pt2", 00:12:06.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.141 "is_configured": true, 00:12:06.141 "data_offset": 2048, 00:12:06.141 "data_size": 63488 00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "name": "pt3", 00:12:06.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.141 "is_configured": true, 00:12:06.141 "data_offset": 2048, 00:12:06.141 "data_size": 63488 00:12:06.141 }, 00:12:06.141 { 00:12:06.141 "name": "pt4", 00:12:06.141 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.141 "is_configured": true, 00:12:06.141 "data_offset": 2048, 00:12:06.141 "data_size": 63488 00:12:06.141 } 00:12:06.141 ] 00:12:06.141 } 00:12:06.141 } 00:12:06.141 }' 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:06.141 pt2 00:12:06.141 pt3 00:12:06.141 pt4' 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.141 16:53:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.141 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.401 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.401 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.401 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.401 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.402 [2024-11-08 16:53:35.717974] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=546094a0-1ccc-4326-99a5-9565c3d91072 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 546094a0-1ccc-4326-99a5-9565c3d91072 ']' 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.402 [2024-11-08 16:53:35.757531] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:06.402 [2024-11-08 16:53:35.757575] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.402 [2024-11-08 16:53:35.757682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.402 [2024-11-08 16:53:35.757772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.402 [2024-11-08 16:53:35.757796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.402 16:53:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.402 [2024-11-08 16:53:35.917325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:06.402 [2024-11-08 16:53:35.919563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:06.402 [2024-11-08 16:53:35.919630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:06.402 [2024-11-08 16:53:35.919679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:06.402 [2024-11-08 16:53:35.919742] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:06.402 [2024-11-08 16:53:35.919824] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:06.402 [2024-11-08 16:53:35.919857] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:06.402 [2024-11-08 16:53:35.919889] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:06.402 [2024-11-08 16:53:35.919909] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:06.402 [2024-11-08 16:53:35.919920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:12:06.402 request: 00:12:06.402 { 00:12:06.402 "name": "raid_bdev1", 00:12:06.402 "raid_level": "concat", 00:12:06.402 "base_bdevs": [ 00:12:06.402 "malloc1", 00:12:06.402 "malloc2", 00:12:06.402 "malloc3", 00:12:06.402 "malloc4" 00:12:06.402 ], 00:12:06.402 "strip_size_kb": 64, 00:12:06.402 "superblock": false, 00:12:06.402 "method": "bdev_raid_create", 00:12:06.402 "req_id": 1 00:12:06.402 } 00:12:06.402 Got JSON-RPC error response 00:12:06.402 response: 00:12:06.402 { 00:12:06.402 "code": -17, 00:12:06.402 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:06.402 } 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.402 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.662 [2024-11-08 16:53:35.977150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:06.662 [2024-11-08 16:53:35.977221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.662 [2024-11-08 16:53:35.977247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:06.662 [2024-11-08 16:53:35.977258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.662 [2024-11-08 16:53:35.979725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.662 [2024-11-08 16:53:35.979768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:06.662 [2024-11-08 16:53:35.979861] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:06.662 [2024-11-08 16:53:35.979914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:06.662 pt1 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.662 16:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.662 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.662 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.662 "name": "raid_bdev1", 00:12:06.662 "uuid": "546094a0-1ccc-4326-99a5-9565c3d91072", 00:12:06.662 "strip_size_kb": 64, 00:12:06.662 "state": "configuring", 00:12:06.662 "raid_level": "concat", 00:12:06.662 "superblock": true, 00:12:06.662 "num_base_bdevs": 4, 00:12:06.662 "num_base_bdevs_discovered": 1, 00:12:06.662 "num_base_bdevs_operational": 4, 00:12:06.662 "base_bdevs_list": [ 00:12:06.662 { 00:12:06.662 "name": "pt1", 00:12:06.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.662 "is_configured": true, 00:12:06.662 "data_offset": 2048, 00:12:06.662 "data_size": 63488 00:12:06.662 }, 00:12:06.662 { 00:12:06.662 "name": null, 00:12:06.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.662 "is_configured": false, 00:12:06.662 "data_offset": 2048, 00:12:06.662 "data_size": 63488 00:12:06.662 }, 00:12:06.662 { 00:12:06.663 "name": null, 00:12:06.663 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.663 "is_configured": false, 00:12:06.663 "data_offset": 2048, 00:12:06.663 "data_size": 63488 00:12:06.663 }, 00:12:06.663 { 00:12:06.663 "name": null, 00:12:06.663 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.663 "is_configured": false, 00:12:06.663 "data_offset": 2048, 00:12:06.663 "data_size": 63488 00:12:06.663 } 00:12:06.663 ] 00:12:06.663 }' 00:12:06.663 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.663 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.922 [2024-11-08 16:53:36.412454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:06.922 [2024-11-08 16:53:36.412534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.922 [2024-11-08 16:53:36.412560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:06.922 [2024-11-08 16:53:36.412571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.922 [2024-11-08 16:53:36.413041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.922 [2024-11-08 16:53:36.413073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:06.922 [2024-11-08 16:53:36.413165] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:06.922 [2024-11-08 16:53:36.413195] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:06.922 pt2 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.922 [2024-11-08 16:53:36.424425] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.922 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.923 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.923 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.923 16:53:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.923 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.923 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.182 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.182 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.182 "name": "raid_bdev1", 00:12:07.182 "uuid": "546094a0-1ccc-4326-99a5-9565c3d91072", 00:12:07.182 "strip_size_kb": 64, 00:12:07.182 "state": "configuring", 00:12:07.182 "raid_level": "concat", 00:12:07.182 "superblock": true, 00:12:07.182 "num_base_bdevs": 4, 00:12:07.182 "num_base_bdevs_discovered": 1, 00:12:07.182 "num_base_bdevs_operational": 4, 00:12:07.182 "base_bdevs_list": [ 00:12:07.182 { 00:12:07.182 "name": "pt1", 00:12:07.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.182 "is_configured": true, 00:12:07.182 "data_offset": 2048, 00:12:07.182 "data_size": 63488 00:12:07.182 }, 00:12:07.182 { 00:12:07.182 "name": null, 00:12:07.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.182 "is_configured": false, 00:12:07.182 "data_offset": 0, 00:12:07.182 "data_size": 63488 00:12:07.182 }, 00:12:07.182 { 00:12:07.182 "name": null, 00:12:07.182 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.182 "is_configured": false, 00:12:07.182 "data_offset": 2048, 00:12:07.182 "data_size": 63488 00:12:07.182 }, 00:12:07.182 { 00:12:07.182 "name": null, 00:12:07.182 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.182 "is_configured": false, 00:12:07.182 "data_offset": 2048, 00:12:07.182 "data_size": 63488 00:12:07.182 } 00:12:07.182 ] 00:12:07.182 }' 00:12:07.182 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.182 16:53:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.442 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:07.442 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:07.442 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:07.442 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.442 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.442 [2024-11-08 16:53:36.859718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:07.442 [2024-11-08 16:53:36.859826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.442 [2024-11-08 16:53:36.859854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:07.442 [2024-11-08 16:53:36.859879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.442 [2024-11-08 16:53:36.860355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.442 [2024-11-08 16:53:36.860393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:07.442 [2024-11-08 16:53:36.860485] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:07.442 [2024-11-08 16:53:36.860514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:07.442 pt2 00:12:07.442 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.443 [2024-11-08 16:53:36.871674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:07.443 [2024-11-08 16:53:36.871764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.443 [2024-11-08 16:53:36.871793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:07.443 [2024-11-08 16:53:36.871809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.443 [2024-11-08 16:53:36.872322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.443 [2024-11-08 16:53:36.872356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:07.443 [2024-11-08 16:53:36.872453] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:07.443 [2024-11-08 16:53:36.872489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:07.443 pt3 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.443 [2024-11-08 16:53:36.883656] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:07.443 [2024-11-08 16:53:36.883766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.443 [2024-11-08 16:53:36.883791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:07.443 [2024-11-08 16:53:36.883804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.443 [2024-11-08 16:53:36.884253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.443 [2024-11-08 16:53:36.884287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:07.443 [2024-11-08 16:53:36.884373] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:07.443 [2024-11-08 16:53:36.884405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:07.443 [2024-11-08 16:53:36.884532] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:07.443 [2024-11-08 16:53:36.884552] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:07.443 [2024-11-08 16:53:36.884849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:07.443 [2024-11-08 16:53:36.885001] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:07.443 [2024-11-08 16:53:36.885017] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:12:07.443 [2024-11-08 16:53:36.885147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.443 pt4 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.443 "name": "raid_bdev1", 00:12:07.443 "uuid": "546094a0-1ccc-4326-99a5-9565c3d91072", 00:12:07.443 "strip_size_kb": 64, 00:12:07.443 "state": "online", 00:12:07.443 "raid_level": "concat", 00:12:07.443 
"superblock": true, 00:12:07.443 "num_base_bdevs": 4, 00:12:07.443 "num_base_bdevs_discovered": 4, 00:12:07.443 "num_base_bdevs_operational": 4, 00:12:07.443 "base_bdevs_list": [ 00:12:07.443 { 00:12:07.443 "name": "pt1", 00:12:07.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.443 "is_configured": true, 00:12:07.443 "data_offset": 2048, 00:12:07.443 "data_size": 63488 00:12:07.443 }, 00:12:07.443 { 00:12:07.443 "name": "pt2", 00:12:07.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.443 "is_configured": true, 00:12:07.443 "data_offset": 2048, 00:12:07.443 "data_size": 63488 00:12:07.443 }, 00:12:07.443 { 00:12:07.443 "name": "pt3", 00:12:07.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.443 "is_configured": true, 00:12:07.443 "data_offset": 2048, 00:12:07.443 "data_size": 63488 00:12:07.443 }, 00:12:07.443 { 00:12:07.443 "name": "pt4", 00:12:07.443 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.443 "is_configured": true, 00:12:07.443 "data_offset": 2048, 00:12:07.443 "data_size": 63488 00:12:07.443 } 00:12:07.443 ] 00:12:07.443 }' 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.443 16:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.011 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:08.011 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:08.011 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:08.011 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:08.011 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:08.011 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:08.011 16:53:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:08.011 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.011 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:08.011 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.011 [2024-11-08 16:53:37.363684] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.011 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.011 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:08.011 "name": "raid_bdev1", 00:12:08.011 "aliases": [ 00:12:08.011 "546094a0-1ccc-4326-99a5-9565c3d91072" 00:12:08.011 ], 00:12:08.011 "product_name": "Raid Volume", 00:12:08.011 "block_size": 512, 00:12:08.011 "num_blocks": 253952, 00:12:08.011 "uuid": "546094a0-1ccc-4326-99a5-9565c3d91072", 00:12:08.011 "assigned_rate_limits": { 00:12:08.011 "rw_ios_per_sec": 0, 00:12:08.011 "rw_mbytes_per_sec": 0, 00:12:08.011 "r_mbytes_per_sec": 0, 00:12:08.011 "w_mbytes_per_sec": 0 00:12:08.011 }, 00:12:08.011 "claimed": false, 00:12:08.011 "zoned": false, 00:12:08.011 "supported_io_types": { 00:12:08.011 "read": true, 00:12:08.011 "write": true, 00:12:08.011 "unmap": true, 00:12:08.011 "flush": true, 00:12:08.011 "reset": true, 00:12:08.011 "nvme_admin": false, 00:12:08.011 "nvme_io": false, 00:12:08.011 "nvme_io_md": false, 00:12:08.011 "write_zeroes": true, 00:12:08.011 "zcopy": false, 00:12:08.011 "get_zone_info": false, 00:12:08.011 "zone_management": false, 00:12:08.011 "zone_append": false, 00:12:08.011 "compare": false, 00:12:08.011 "compare_and_write": false, 00:12:08.011 "abort": false, 00:12:08.011 "seek_hole": false, 00:12:08.011 "seek_data": false, 00:12:08.011 "copy": false, 00:12:08.011 "nvme_iov_md": false 00:12:08.011 }, 00:12:08.011 
"memory_domains": [ 00:12:08.011 { 00:12:08.011 "dma_device_id": "system", 00:12:08.011 "dma_device_type": 1 00:12:08.011 }, 00:12:08.011 { 00:12:08.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.011 "dma_device_type": 2 00:12:08.011 }, 00:12:08.011 { 00:12:08.011 "dma_device_id": "system", 00:12:08.011 "dma_device_type": 1 00:12:08.011 }, 00:12:08.011 { 00:12:08.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.011 "dma_device_type": 2 00:12:08.011 }, 00:12:08.011 { 00:12:08.011 "dma_device_id": "system", 00:12:08.011 "dma_device_type": 1 00:12:08.011 }, 00:12:08.011 { 00:12:08.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.011 "dma_device_type": 2 00:12:08.011 }, 00:12:08.011 { 00:12:08.011 "dma_device_id": "system", 00:12:08.011 "dma_device_type": 1 00:12:08.011 }, 00:12:08.011 { 00:12:08.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.011 "dma_device_type": 2 00:12:08.011 } 00:12:08.011 ], 00:12:08.011 "driver_specific": { 00:12:08.011 "raid": { 00:12:08.011 "uuid": "546094a0-1ccc-4326-99a5-9565c3d91072", 00:12:08.011 "strip_size_kb": 64, 00:12:08.011 "state": "online", 00:12:08.011 "raid_level": "concat", 00:12:08.011 "superblock": true, 00:12:08.011 "num_base_bdevs": 4, 00:12:08.011 "num_base_bdevs_discovered": 4, 00:12:08.011 "num_base_bdevs_operational": 4, 00:12:08.011 "base_bdevs_list": [ 00:12:08.011 { 00:12:08.011 "name": "pt1", 00:12:08.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.011 "is_configured": true, 00:12:08.011 "data_offset": 2048, 00:12:08.011 "data_size": 63488 00:12:08.011 }, 00:12:08.011 { 00:12:08.011 "name": "pt2", 00:12:08.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.011 "is_configured": true, 00:12:08.011 "data_offset": 2048, 00:12:08.011 "data_size": 63488 00:12:08.011 }, 00:12:08.011 { 00:12:08.011 "name": "pt3", 00:12:08.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.011 "is_configured": true, 00:12:08.011 "data_offset": 2048, 00:12:08.011 "data_size": 63488 
00:12:08.011 }, 00:12:08.011 { 00:12:08.012 "name": "pt4", 00:12:08.012 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.012 "is_configured": true, 00:12:08.012 "data_offset": 2048, 00:12:08.012 "data_size": 63488 00:12:08.012 } 00:12:08.012 ] 00:12:08.012 } 00:12:08.012 } 00:12:08.012 }' 00:12:08.012 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:08.012 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:08.012 pt2 00:12:08.012 pt3 00:12:08.012 pt4' 00:12:08.012 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.012 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:08.012 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.012 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:08.012 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.012 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.012 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.012 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.271 [2024-11-08 16:53:37.703700] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 546094a0-1ccc-4326-99a5-9565c3d91072 '!=' 546094a0-1ccc-4326-99a5-9565c3d91072 ']' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83474 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83474 ']' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83474 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83474 00:12:08.271 killing process with pid 83474 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83474' 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83474 00:12:08.271 [2024-11-08 16:53:37.776860] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:08.271 16:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83474 00:12:08.271 [2024-11-08 16:53:37.776999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.271 [2024-11-08 16:53:37.777088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.271 [2024-11-08 16:53:37.777113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:12:08.576 [2024-11-08 16:53:37.825779] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:08.866 16:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:08.866 00:12:08.866 real 0m4.398s 00:12:08.866 user 0m6.909s 00:12:08.866 sys 0m0.986s 00:12:08.866 16:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.866 16:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.866 ************************************ 00:12:08.866 END TEST raid_superblock_test 
00:12:08.866 ************************************ 00:12:08.866 16:53:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:08.866 16:53:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:08.866 16:53:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.866 16:53:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:08.866 ************************************ 00:12:08.866 START TEST raid_read_error_test 00:12:08.866 ************************************ 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qAZepoTkNO 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83728 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83728 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83728 ']' 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:08.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:08.866 16:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.866 [2024-11-08 16:53:38.278709] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:08.866 [2024-11-08 16:53:38.278863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83728 ] 00:12:09.124 [2024-11-08 16:53:38.434500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.124 [2024-11-08 16:53:38.493045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.124 [2024-11-08 16:53:38.541938] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.124 [2024-11-08 16:53:38.542006] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.060 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.060 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:10.060 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.060 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.060 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.060 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.060 BaseBdev1_malloc 00:12:10.060 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 true 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 [2024-11-08 16:53:39.300312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:10.061 [2024-11-08 16:53:39.300395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.061 [2024-11-08 16:53:39.300436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:10.061 [2024-11-08 16:53:39.300452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.061 [2024-11-08 16:53:39.303797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.061 [2024-11-08 16:53:39.303862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.061 BaseBdev1 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 BaseBdev2_malloc 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 true 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 [2024-11-08 16:53:39.343981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:10.061 [2024-11-08 16:53:39.344071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.061 [2024-11-08 16:53:39.344109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:10.061 [2024-11-08 16:53:39.344126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.061 [2024-11-08 16:53:39.347314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.061 [2024-11-08 16:53:39.347381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:10.061 BaseBdev2 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 BaseBdev3_malloc 00:12:10.061 16:53:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 true 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 [2024-11-08 16:53:39.382083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:10.061 [2024-11-08 16:53:39.382165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.061 [2024-11-08 16:53:39.382201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:10.061 [2024-11-08 16:53:39.382214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.061 [2024-11-08 16:53:39.384880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.061 [2024-11-08 16:53:39.384931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:10.061 BaseBdev3 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 BaseBdev4_malloc 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 true 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 [2024-11-08 16:53:39.420192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:10.061 [2024-11-08 16:53:39.420280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.061 [2024-11-08 16:53:39.420325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:10.061 [2024-11-08 16:53:39.420342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.061 [2024-11-08 16:53:39.423672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.061 [2024-11-08 16:53:39.423745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:10.061 BaseBdev4 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 [2024-11-08 16:53:39.432247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.061 [2024-11-08 16:53:39.434783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.061 [2024-11-08 16:53:39.434905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.061 [2024-11-08 16:53:39.434976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:10.061 [2024-11-08 16:53:39.435262] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:12:10.061 [2024-11-08 16:53:39.435291] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:10.061 [2024-11-08 16:53:39.435671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:10.061 [2024-11-08 16:53:39.435927] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:12:10.061 [2024-11-08 16:53:39.435963] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:12:10.061 [2024-11-08 16:53:39.436290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:10.061 16:53:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.061 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.061 "name": "raid_bdev1", 00:12:10.061 "uuid": "e2b0f41a-1cc0-4f05-88ca-93f47aa9730d", 00:12:10.061 "strip_size_kb": 64, 00:12:10.061 "state": "online", 00:12:10.061 "raid_level": "concat", 00:12:10.061 "superblock": true, 00:12:10.061 "num_base_bdevs": 4, 00:12:10.061 "num_base_bdevs_discovered": 4, 00:12:10.061 "num_base_bdevs_operational": 4, 00:12:10.061 "base_bdevs_list": [ 
00:12:10.061 { 00:12:10.061 "name": "BaseBdev1", 00:12:10.061 "uuid": "28b735fe-422a-59c9-b49d-dca77055fbd1", 00:12:10.062 "is_configured": true, 00:12:10.062 "data_offset": 2048, 00:12:10.062 "data_size": 63488 00:12:10.062 }, 00:12:10.062 { 00:12:10.062 "name": "BaseBdev2", 00:12:10.062 "uuid": "5f680242-4376-5342-8107-96d60bb9b077", 00:12:10.062 "is_configured": true, 00:12:10.062 "data_offset": 2048, 00:12:10.062 "data_size": 63488 00:12:10.062 }, 00:12:10.062 { 00:12:10.062 "name": "BaseBdev3", 00:12:10.062 "uuid": "d3cf4d8f-be6a-5a82-8eac-020374442c8f", 00:12:10.062 "is_configured": true, 00:12:10.062 "data_offset": 2048, 00:12:10.062 "data_size": 63488 00:12:10.062 }, 00:12:10.062 { 00:12:10.062 "name": "BaseBdev4", 00:12:10.062 "uuid": "53cddaa0-9c04-5c05-b692-49ba15c2f0bd", 00:12:10.062 "is_configured": true, 00:12:10.062 "data_offset": 2048, 00:12:10.062 "data_size": 63488 00:12:10.062 } 00:12:10.062 ] 00:12:10.062 }' 00:12:10.062 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.062 16:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.319 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:10.319 16:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:10.577 [2024-11-08 16:53:39.964200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.513 16:53:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.513 16:53:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.513 "name": "raid_bdev1", 00:12:11.513 "uuid": "e2b0f41a-1cc0-4f05-88ca-93f47aa9730d", 00:12:11.513 "strip_size_kb": 64, 00:12:11.513 "state": "online", 00:12:11.513 "raid_level": "concat", 00:12:11.513 "superblock": true, 00:12:11.513 "num_base_bdevs": 4, 00:12:11.513 "num_base_bdevs_discovered": 4, 00:12:11.513 "num_base_bdevs_operational": 4, 00:12:11.513 "base_bdevs_list": [ 00:12:11.513 { 00:12:11.513 "name": "BaseBdev1", 00:12:11.513 "uuid": "28b735fe-422a-59c9-b49d-dca77055fbd1", 00:12:11.513 "is_configured": true, 00:12:11.513 "data_offset": 2048, 00:12:11.513 "data_size": 63488 00:12:11.513 }, 00:12:11.513 { 00:12:11.513 "name": "BaseBdev2", 00:12:11.513 "uuid": "5f680242-4376-5342-8107-96d60bb9b077", 00:12:11.513 "is_configured": true, 00:12:11.513 "data_offset": 2048, 00:12:11.513 "data_size": 63488 00:12:11.513 }, 00:12:11.513 { 00:12:11.513 "name": "BaseBdev3", 00:12:11.513 "uuid": "d3cf4d8f-be6a-5a82-8eac-020374442c8f", 00:12:11.513 "is_configured": true, 00:12:11.513 "data_offset": 2048, 00:12:11.513 "data_size": 63488 00:12:11.513 }, 00:12:11.513 { 00:12:11.513 "name": "BaseBdev4", 00:12:11.513 "uuid": "53cddaa0-9c04-5c05-b692-49ba15c2f0bd", 00:12:11.513 "is_configured": true, 00:12:11.513 "data_offset": 2048, 00:12:11.513 "data_size": 63488 00:12:11.513 } 00:12:11.513 ] 00:12:11.513 }' 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.513 16:53:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.771 16:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:11.771 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.771 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.771 [2024-11-08 16:53:41.269606] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.771 [2024-11-08 16:53:41.269683] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.771 [2024-11-08 16:53:41.272805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.771 [2024-11-08 16:53:41.272873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.771 [2024-11-08 16:53:41.272927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.771 [2024-11-08 16:53:41.272945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:12:11.771 { 00:12:11.771 "results": [ 00:12:11.771 { 00:12:11.771 "job": "raid_bdev1", 00:12:11.771 "core_mask": "0x1", 00:12:11.771 "workload": "randrw", 00:12:11.771 "percentage": 50, 00:12:11.771 "status": "finished", 00:12:11.771 "queue_depth": 1, 00:12:11.771 "io_size": 131072, 00:12:11.771 "runtime": 1.305557, 00:12:11.771 "iops": 12829.007082800674, 00:12:11.771 "mibps": 1603.6258853500842, 00:12:11.771 "io_failed": 1, 00:12:11.771 "io_timeout": 0, 00:12:11.771 "avg_latency_us": 108.09845354884963, 00:12:11.771 "min_latency_us": 28.618340611353712, 00:12:11.771 "max_latency_us": 1802.955458515284 00:12:11.771 } 00:12:11.771 ], 00:12:11.771 "core_count": 1 00:12:11.771 } 00:12:11.771 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.771 16:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83728 00:12:11.771 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83728 ']' 00:12:11.771 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83728 00:12:11.771 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:11.771 16:53:41 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:11.771 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83728 00:12:12.030 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:12.030 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:12.030 killing process with pid 83728 00:12:12.030 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83728' 00:12:12.030 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83728 00:12:12.031 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83728 00:12:12.031 [2024-11-08 16:53:41.311199] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.031 [2024-11-08 16:53:41.351652] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.289 16:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qAZepoTkNO 00:12:12.289 16:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:12.289 16:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:12.289 16:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:12:12.289 16:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:12.289 16:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:12.289 16:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:12.289 16:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:12:12.289 00:12:12.289 real 0m3.438s 00:12:12.289 user 0m4.369s 00:12:12.289 sys 0m0.557s 00:12:12.289 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:12.289 16:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.289 ************************************ 00:12:12.289 END TEST raid_read_error_test 00:12:12.289 ************************************ 00:12:12.289 16:53:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:12.289 16:53:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:12.289 16:53:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.289 16:53:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.289 ************************************ 00:12:12.289 START TEST raid_write_error_test 00:12:12.289 ************************************ 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:12.289 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oSJDZj7sIb 00:12:12.290 16:53:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83862 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83862 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83862 ']' 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.290 16:53:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.290 [2024-11-08 16:53:41.764845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:12.290 [2024-11-08 16:53:41.764982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83862 ] 00:12:12.548 [2024-11-08 16:53:41.930733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.548 [2024-11-08 16:53:41.983598] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.548 [2024-11-08 16:53:42.027061] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.548 [2024-11-08 16:53:42.027143] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.119 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:13.119 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:13.119 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.119 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:13.119 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.119 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.119 BaseBdev1_malloc 00:12:13.119 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.119 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:13.119 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.119 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 true 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 [2024-11-08 16:53:42.654447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:13.379 [2024-11-08 16:53:42.654507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.379 [2024-11-08 16:53:42.654533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:13.379 [2024-11-08 16:53:42.654544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.379 [2024-11-08 16:53:42.657104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.379 [2024-11-08 16:53:42.657145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.379 BaseBdev1 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 BaseBdev2_malloc 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:13.379 16:53:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 true 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 [2024-11-08 16:53:42.709917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:13.379 [2024-11-08 16:53:42.709982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.379 [2024-11-08 16:53:42.710006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:13.379 [2024-11-08 16:53:42.710018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.379 [2024-11-08 16:53:42.712520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.379 [2024-11-08 16:53:42.712564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:13.379 BaseBdev2 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:13.379 BaseBdev3_malloc 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 true 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 [2024-11-08 16:53:42.751052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:13.380 [2024-11-08 16:53:42.751116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.380 [2024-11-08 16:53:42.751158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:13.380 [2024-11-08 16:53:42.751169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.380 [2024-11-08 16:53:42.753489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.380 [2024-11-08 16:53:42.753529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:13.380 BaseBdev3 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.380 BaseBdev4_malloc 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.380 true 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.380 [2024-11-08 16:53:42.791970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:13.380 [2024-11-08 16:53:42.792029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.380 [2024-11-08 16:53:42.792054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:13.380 [2024-11-08 16:53:42.792063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.380 [2024-11-08 16:53:42.794202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.380 [2024-11-08 16:53:42.794241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:13.380 BaseBdev4 
00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.380 [2024-11-08 16:53:42.804042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.380 [2024-11-08 16:53:42.806114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.380 [2024-11-08 16:53:42.806213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.380 [2024-11-08 16:53:42.806283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:13.380 [2024-11-08 16:53:42.806535] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:12:13.380 [2024-11-08 16:53:42.806556] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:13.380 [2024-11-08 16:53:42.806890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:13.380 [2024-11-08 16:53:42.807077] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:12:13.380 [2024-11-08 16:53:42.807109] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:12:13.380 [2024-11-08 16:53:42.807279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.380 "name": "raid_bdev1", 00:12:13.380 "uuid": "18e3a6e1-ef35-4958-a844-515c9d0aa3cd", 00:12:13.380 "strip_size_kb": 64, 00:12:13.380 "state": "online", 00:12:13.380 "raid_level": "concat", 00:12:13.380 "superblock": true, 00:12:13.380 "num_base_bdevs": 4, 00:12:13.380 "num_base_bdevs_discovered": 4, 00:12:13.380 
"num_base_bdevs_operational": 4, 00:12:13.380 "base_bdevs_list": [ 00:12:13.380 { 00:12:13.380 "name": "BaseBdev1", 00:12:13.380 "uuid": "3c655264-7995-5f84-8cbb-eadd1f5fd3d5", 00:12:13.380 "is_configured": true, 00:12:13.380 "data_offset": 2048, 00:12:13.380 "data_size": 63488 00:12:13.380 }, 00:12:13.380 { 00:12:13.380 "name": "BaseBdev2", 00:12:13.380 "uuid": "eb9980d2-c3eb-56fc-b55c-0b5c1c411917", 00:12:13.380 "is_configured": true, 00:12:13.380 "data_offset": 2048, 00:12:13.380 "data_size": 63488 00:12:13.380 }, 00:12:13.380 { 00:12:13.380 "name": "BaseBdev3", 00:12:13.380 "uuid": "0ddf7374-d47a-5e89-bdeb-9de2ffff4634", 00:12:13.380 "is_configured": true, 00:12:13.380 "data_offset": 2048, 00:12:13.380 "data_size": 63488 00:12:13.380 }, 00:12:13.380 { 00:12:13.380 "name": "BaseBdev4", 00:12:13.380 "uuid": "09c6d8c4-8916-51f5-a6a7-2157263c3020", 00:12:13.380 "is_configured": true, 00:12:13.380 "data_offset": 2048, 00:12:13.380 "data_size": 63488 00:12:13.380 } 00:12:13.380 ] 00:12:13.380 }' 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.380 16:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.947 16:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:13.947 16:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:13.947 [2024-11-08 16:53:43.343521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.884 16:53:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.884 "name": "raid_bdev1", 00:12:14.884 "uuid": "18e3a6e1-ef35-4958-a844-515c9d0aa3cd", 00:12:14.884 "strip_size_kb": 64, 00:12:14.884 "state": "online", 00:12:14.884 "raid_level": "concat", 00:12:14.884 "superblock": true, 00:12:14.884 "num_base_bdevs": 4, 00:12:14.884 "num_base_bdevs_discovered": 4, 00:12:14.884 "num_base_bdevs_operational": 4, 00:12:14.884 "base_bdevs_list": [ 00:12:14.884 { 00:12:14.884 "name": "BaseBdev1", 00:12:14.884 "uuid": "3c655264-7995-5f84-8cbb-eadd1f5fd3d5", 00:12:14.884 "is_configured": true, 00:12:14.884 "data_offset": 2048, 00:12:14.884 "data_size": 63488 00:12:14.884 }, 00:12:14.884 { 00:12:14.884 "name": "BaseBdev2", 00:12:14.884 "uuid": "eb9980d2-c3eb-56fc-b55c-0b5c1c411917", 00:12:14.884 "is_configured": true, 00:12:14.884 "data_offset": 2048, 00:12:14.884 "data_size": 63488 00:12:14.884 }, 00:12:14.884 { 00:12:14.884 "name": "BaseBdev3", 00:12:14.884 "uuid": "0ddf7374-d47a-5e89-bdeb-9de2ffff4634", 00:12:14.884 "is_configured": true, 00:12:14.884 "data_offset": 2048, 00:12:14.884 "data_size": 63488 00:12:14.884 }, 00:12:14.884 { 00:12:14.884 "name": "BaseBdev4", 00:12:14.884 "uuid": "09c6d8c4-8916-51f5-a6a7-2157263c3020", 00:12:14.884 "is_configured": true, 00:12:14.884 "data_offset": 2048, 00:12:14.884 "data_size": 63488 00:12:14.884 } 00:12:14.884 ] 00:12:14.884 }' 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.884 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.453 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.453 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.453 16:53:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.453 [2024-11-08 16:53:44.711905] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.453 [2024-11-08 16:53:44.712015] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.453 [2024-11-08 16:53:44.715039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.453 [2024-11-08 16:53:44.715182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.453 [2024-11-08 16:53:44.715262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.453 [2024-11-08 16:53:44.715340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:12:15.453 { 00:12:15.453 "results": [ 00:12:15.453 { 00:12:15.453 "job": "raid_bdev1", 00:12:15.453 "core_mask": "0x1", 00:12:15.453 "workload": "randrw", 00:12:15.453 "percentage": 50, 00:12:15.453 "status": "finished", 00:12:15.453 "queue_depth": 1, 00:12:15.453 "io_size": 131072, 00:12:15.453 "runtime": 1.369195, 00:12:15.453 "iops": 15147.586720664332, 00:12:15.453 "mibps": 1893.4483400830416, 00:12:15.453 "io_failed": 1, 00:12:15.453 "io_timeout": 0, 00:12:15.453 "avg_latency_us": 91.64969984350556, 00:12:15.453 "min_latency_us": 27.053275109170304, 00:12:15.454 "max_latency_us": 1502.46288209607 00:12:15.454 } 00:12:15.454 ], 00:12:15.454 "core_count": 1 00:12:15.454 } 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83862 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83862 ']' 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83862 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83862 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:15.454 killing process with pid 83862 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83862' 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83862 00:12:15.454 [2024-11-08 16:53:44.757542] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.454 16:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83862 00:12:15.454 [2024-11-08 16:53:44.794296] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:15.715 16:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:15.715 16:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oSJDZj7sIb 00:12:15.715 16:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:15.715 16:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:15.715 16:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:15.715 16:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:15.715 16:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:15.715 16:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:15.715 00:12:15.715 real 0m3.393s 00:12:15.715 user 0m4.246s 
00:12:15.715 sys 0m0.586s 00:12:15.715 ************************************ 00:12:15.715 END TEST raid_write_error_test 00:12:15.715 ************************************ 00:12:15.715 16:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.715 16:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.715 16:53:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:15.715 16:53:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:15.715 16:53:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:15.715 16:53:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.715 16:53:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:15.715 ************************************ 00:12:15.715 START TEST raid_state_function_test 00:12:15.715 ************************************ 00:12:15.715 16:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:12:15.715 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:15.716 
16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:15.716 16:53:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83989 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83989' 00:12:15.716 Process raid pid: 83989 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83989 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83989 ']' 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.716 16:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.716 [2024-11-08 16:53:45.225568] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:15.716 [2024-11-08 16:53:45.225719] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.973 [2024-11-08 16:53:45.388275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.973 [2024-11-08 16:53:45.442584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.974 [2024-11-08 16:53:45.487428] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.974 [2024-11-08 16:53:45.487469] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.907 [2024-11-08 16:53:46.186205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:16.907 [2024-11-08 16:53:46.186298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:16.907 [2024-11-08 16:53:46.186322] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.907 [2024-11-08 16:53:46.186340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.907 [2024-11-08 16:53:46.186354] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:16.907 [2024-11-08 16:53:46.186373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:16.907 [2024-11-08 16:53:46.186384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:16.907 [2024-11-08 16:53:46.186397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.907 "name": "Existed_Raid", 00:12:16.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.907 "strip_size_kb": 0, 00:12:16.907 "state": "configuring", 00:12:16.907 "raid_level": "raid1", 00:12:16.907 "superblock": false, 00:12:16.907 "num_base_bdevs": 4, 00:12:16.907 "num_base_bdevs_discovered": 0, 00:12:16.907 "num_base_bdevs_operational": 4, 00:12:16.907 "base_bdevs_list": [ 00:12:16.907 { 00:12:16.907 "name": "BaseBdev1", 00:12:16.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.907 "is_configured": false, 00:12:16.907 "data_offset": 0, 00:12:16.907 "data_size": 0 00:12:16.907 }, 00:12:16.907 { 00:12:16.907 "name": "BaseBdev2", 00:12:16.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.907 "is_configured": false, 00:12:16.907 "data_offset": 0, 00:12:16.907 "data_size": 0 00:12:16.907 }, 00:12:16.907 { 00:12:16.907 "name": "BaseBdev3", 00:12:16.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.907 "is_configured": false, 00:12:16.907 "data_offset": 0, 00:12:16.907 "data_size": 0 00:12:16.907 }, 00:12:16.907 { 00:12:16.907 "name": "BaseBdev4", 00:12:16.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.907 "is_configured": false, 00:12:16.907 "data_offset": 0, 00:12:16.907 "data_size": 0 00:12:16.907 } 00:12:16.907 ] 00:12:16.907 }' 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.907 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.165 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 [2024-11-08 16:53:46.637791] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.166 [2024-11-08 16:53:46.637840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 [2024-11-08 16:53:46.645959] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.166 [2024-11-08 16:53:46.646066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.166 [2024-11-08 16:53:46.646107] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.166 [2024-11-08 16:53:46.646146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.166 [2024-11-08 16:53:46.646177] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:17.166 [2024-11-08 16:53:46.646212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.166 [2024-11-08 16:53:46.646242] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:17.166 [2024-11-08 16:53:46.646280] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 [2024-11-08 16:53:46.664052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.166 BaseBdev1 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.166 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 [ 00:12:17.166 { 00:12:17.166 "name": "BaseBdev1", 00:12:17.166 "aliases": [ 00:12:17.166 "5600746d-e40b-42ff-8aa7-c8eaff2d5d2d" 00:12:17.166 ], 00:12:17.166 "product_name": "Malloc disk", 00:12:17.166 "block_size": 512, 00:12:17.166 "num_blocks": 65536, 00:12:17.166 "uuid": "5600746d-e40b-42ff-8aa7-c8eaff2d5d2d", 00:12:17.166 "assigned_rate_limits": { 00:12:17.166 "rw_ios_per_sec": 0, 00:12:17.166 "rw_mbytes_per_sec": 0, 00:12:17.166 "r_mbytes_per_sec": 0, 00:12:17.166 "w_mbytes_per_sec": 0 00:12:17.166 }, 00:12:17.166 "claimed": true, 00:12:17.166 "claim_type": "exclusive_write", 00:12:17.166 "zoned": false, 00:12:17.426 "supported_io_types": { 00:12:17.426 "read": true, 00:12:17.426 "write": true, 00:12:17.426 "unmap": true, 00:12:17.426 "flush": true, 00:12:17.426 "reset": true, 00:12:17.426 "nvme_admin": false, 00:12:17.426 "nvme_io": false, 00:12:17.426 "nvme_io_md": false, 00:12:17.426 "write_zeroes": true, 00:12:17.426 "zcopy": true, 00:12:17.426 "get_zone_info": false, 00:12:17.426 "zone_management": false, 00:12:17.426 "zone_append": false, 00:12:17.426 "compare": false, 00:12:17.426 "compare_and_write": false, 00:12:17.426 "abort": true, 00:12:17.426 "seek_hole": false, 00:12:17.426 "seek_data": false, 00:12:17.426 "copy": true, 00:12:17.426 "nvme_iov_md": false 00:12:17.426 }, 00:12:17.426 "memory_domains": [ 00:12:17.426 { 00:12:17.426 "dma_device_id": "system", 00:12:17.426 "dma_device_type": 1 00:12:17.426 }, 00:12:17.426 { 00:12:17.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.426 "dma_device_type": 2 00:12:17.426 } 00:12:17.426 ], 00:12:17.426 "driver_specific": {} 00:12:17.426 } 00:12:17.426 ] 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.426 "name": "Existed_Raid", 
00:12:17.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.426 "strip_size_kb": 0, 00:12:17.426 "state": "configuring", 00:12:17.426 "raid_level": "raid1", 00:12:17.426 "superblock": false, 00:12:17.426 "num_base_bdevs": 4, 00:12:17.426 "num_base_bdevs_discovered": 1, 00:12:17.426 "num_base_bdevs_operational": 4, 00:12:17.426 "base_bdevs_list": [ 00:12:17.426 { 00:12:17.426 "name": "BaseBdev1", 00:12:17.426 "uuid": "5600746d-e40b-42ff-8aa7-c8eaff2d5d2d", 00:12:17.426 "is_configured": true, 00:12:17.426 "data_offset": 0, 00:12:17.426 "data_size": 65536 00:12:17.426 }, 00:12:17.426 { 00:12:17.426 "name": "BaseBdev2", 00:12:17.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.426 "is_configured": false, 00:12:17.426 "data_offset": 0, 00:12:17.426 "data_size": 0 00:12:17.426 }, 00:12:17.426 { 00:12:17.426 "name": "BaseBdev3", 00:12:17.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.426 "is_configured": false, 00:12:17.426 "data_offset": 0, 00:12:17.426 "data_size": 0 00:12:17.426 }, 00:12:17.426 { 00:12:17.426 "name": "BaseBdev4", 00:12:17.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.426 "is_configured": false, 00:12:17.426 "data_offset": 0, 00:12:17.426 "data_size": 0 00:12:17.426 } 00:12:17.426 ] 00:12:17.426 }' 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.426 16:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.685 [2024-11-08 16:53:47.151468] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.685 [2024-11-08 16:53:47.151603] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.685 [2024-11-08 16:53:47.163520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.685 [2024-11-08 16:53:47.165720] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.685 [2024-11-08 16:53:47.165822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.685 [2024-11-08 16:53:47.165861] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:17.685 [2024-11-08 16:53:47.165896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.685 [2024-11-08 16:53:47.165925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:17.685 [2024-11-08 16:53:47.165961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.685 
16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.685 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.686 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.686 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.686 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.686 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.686 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.686 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.686 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.686 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.686 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.686 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.944 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.944 "name": "Existed_Raid", 00:12:17.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.944 "strip_size_kb": 0, 00:12:17.944 "state": "configuring", 00:12:17.944 "raid_level": "raid1", 00:12:17.944 "superblock": false, 00:12:17.944 "num_base_bdevs": 4, 00:12:17.944 "num_base_bdevs_discovered": 1, 
00:12:17.944 "num_base_bdevs_operational": 4, 00:12:17.944 "base_bdevs_list": [ 00:12:17.944 { 00:12:17.944 "name": "BaseBdev1", 00:12:17.944 "uuid": "5600746d-e40b-42ff-8aa7-c8eaff2d5d2d", 00:12:17.944 "is_configured": true, 00:12:17.944 "data_offset": 0, 00:12:17.944 "data_size": 65536 00:12:17.944 }, 00:12:17.944 { 00:12:17.944 "name": "BaseBdev2", 00:12:17.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.944 "is_configured": false, 00:12:17.944 "data_offset": 0, 00:12:17.944 "data_size": 0 00:12:17.944 }, 00:12:17.944 { 00:12:17.944 "name": "BaseBdev3", 00:12:17.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.944 "is_configured": false, 00:12:17.944 "data_offset": 0, 00:12:17.944 "data_size": 0 00:12:17.944 }, 00:12:17.944 { 00:12:17.944 "name": "BaseBdev4", 00:12:17.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.944 "is_configured": false, 00:12:17.944 "data_offset": 0, 00:12:17.944 "data_size": 0 00:12:17.944 } 00:12:17.944 ] 00:12:17.944 }' 00:12:17.944 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.944 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.202 [2024-11-08 16:53:47.618596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.202 BaseBdev2 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.202 [ 00:12:18.202 { 00:12:18.202 "name": "BaseBdev2", 00:12:18.202 "aliases": [ 00:12:18.202 "449c20bf-b26e-48e6-9b00-5c61190e3bdc" 00:12:18.202 ], 00:12:18.202 "product_name": "Malloc disk", 00:12:18.202 "block_size": 512, 00:12:18.202 "num_blocks": 65536, 00:12:18.202 "uuid": "449c20bf-b26e-48e6-9b00-5c61190e3bdc", 00:12:18.202 "assigned_rate_limits": { 00:12:18.202 "rw_ios_per_sec": 0, 00:12:18.202 "rw_mbytes_per_sec": 0, 00:12:18.202 "r_mbytes_per_sec": 0, 00:12:18.202 "w_mbytes_per_sec": 0 00:12:18.202 }, 00:12:18.202 "claimed": true, 00:12:18.202 "claim_type": "exclusive_write", 00:12:18.202 "zoned": false, 00:12:18.202 "supported_io_types": { 00:12:18.202 "read": true, 
00:12:18.202 "write": true, 00:12:18.202 "unmap": true, 00:12:18.202 "flush": true, 00:12:18.202 "reset": true, 00:12:18.202 "nvme_admin": false, 00:12:18.202 "nvme_io": false, 00:12:18.202 "nvme_io_md": false, 00:12:18.202 "write_zeroes": true, 00:12:18.202 "zcopy": true, 00:12:18.202 "get_zone_info": false, 00:12:18.202 "zone_management": false, 00:12:18.202 "zone_append": false, 00:12:18.202 "compare": false, 00:12:18.202 "compare_and_write": false, 00:12:18.202 "abort": true, 00:12:18.202 "seek_hole": false, 00:12:18.202 "seek_data": false, 00:12:18.202 "copy": true, 00:12:18.202 "nvme_iov_md": false 00:12:18.202 }, 00:12:18.202 "memory_domains": [ 00:12:18.202 { 00:12:18.202 "dma_device_id": "system", 00:12:18.202 "dma_device_type": 1 00:12:18.202 }, 00:12:18.202 { 00:12:18.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.202 "dma_device_type": 2 00:12:18.202 } 00:12:18.202 ], 00:12:18.202 "driver_specific": {} 00:12:18.202 } 00:12:18.202 ] 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.202 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.202 "name": "Existed_Raid", 00:12:18.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.202 "strip_size_kb": 0, 00:12:18.202 "state": "configuring", 00:12:18.202 "raid_level": "raid1", 00:12:18.202 "superblock": false, 00:12:18.202 "num_base_bdevs": 4, 00:12:18.202 "num_base_bdevs_discovered": 2, 00:12:18.202 "num_base_bdevs_operational": 4, 00:12:18.202 "base_bdevs_list": [ 00:12:18.202 { 00:12:18.202 "name": "BaseBdev1", 00:12:18.202 "uuid": "5600746d-e40b-42ff-8aa7-c8eaff2d5d2d", 00:12:18.202 "is_configured": true, 00:12:18.202 "data_offset": 0, 00:12:18.202 "data_size": 65536 00:12:18.202 }, 00:12:18.202 { 00:12:18.202 "name": "BaseBdev2", 00:12:18.202 "uuid": "449c20bf-b26e-48e6-9b00-5c61190e3bdc", 00:12:18.202 "is_configured": true, 
00:12:18.202 "data_offset": 0, 00:12:18.203 "data_size": 65536 00:12:18.203 }, 00:12:18.203 { 00:12:18.203 "name": "BaseBdev3", 00:12:18.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.203 "is_configured": false, 00:12:18.203 "data_offset": 0, 00:12:18.203 "data_size": 0 00:12:18.203 }, 00:12:18.203 { 00:12:18.203 "name": "BaseBdev4", 00:12:18.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.203 "is_configured": false, 00:12:18.203 "data_offset": 0, 00:12:18.203 "data_size": 0 00:12:18.203 } 00:12:18.203 ] 00:12:18.203 }' 00:12:18.203 16:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.203 16:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.769 [2024-11-08 16:53:48.097256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.769 BaseBdev3 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.769 [ 00:12:18.769 { 00:12:18.769 "name": "BaseBdev3", 00:12:18.769 "aliases": [ 00:12:18.769 "acd13511-378c-4fa4-99f6-918ea1ad549f" 00:12:18.769 ], 00:12:18.769 "product_name": "Malloc disk", 00:12:18.769 "block_size": 512, 00:12:18.769 "num_blocks": 65536, 00:12:18.769 "uuid": "acd13511-378c-4fa4-99f6-918ea1ad549f", 00:12:18.769 "assigned_rate_limits": { 00:12:18.769 "rw_ios_per_sec": 0, 00:12:18.769 "rw_mbytes_per_sec": 0, 00:12:18.769 "r_mbytes_per_sec": 0, 00:12:18.769 "w_mbytes_per_sec": 0 00:12:18.769 }, 00:12:18.769 "claimed": true, 00:12:18.769 "claim_type": "exclusive_write", 00:12:18.769 "zoned": false, 00:12:18.769 "supported_io_types": { 00:12:18.769 "read": true, 00:12:18.769 "write": true, 00:12:18.769 "unmap": true, 00:12:18.769 "flush": true, 00:12:18.769 "reset": true, 00:12:18.769 "nvme_admin": false, 00:12:18.769 "nvme_io": false, 00:12:18.769 "nvme_io_md": false, 00:12:18.769 "write_zeroes": true, 00:12:18.769 "zcopy": true, 00:12:18.769 "get_zone_info": false, 00:12:18.769 "zone_management": false, 00:12:18.769 "zone_append": false, 00:12:18.769 "compare": false, 00:12:18.769 "compare_and_write": false, 
00:12:18.769 "abort": true, 00:12:18.769 "seek_hole": false, 00:12:18.769 "seek_data": false, 00:12:18.769 "copy": true, 00:12:18.769 "nvme_iov_md": false 00:12:18.769 }, 00:12:18.769 "memory_domains": [ 00:12:18.769 { 00:12:18.769 "dma_device_id": "system", 00:12:18.769 "dma_device_type": 1 00:12:18.769 }, 00:12:18.769 { 00:12:18.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.769 "dma_device_type": 2 00:12:18.769 } 00:12:18.769 ], 00:12:18.769 "driver_specific": {} 00:12:18.769 } 00:12:18.769 ] 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.769 "name": "Existed_Raid", 00:12:18.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.769 "strip_size_kb": 0, 00:12:18.769 "state": "configuring", 00:12:18.769 "raid_level": "raid1", 00:12:18.769 "superblock": false, 00:12:18.769 "num_base_bdevs": 4, 00:12:18.769 "num_base_bdevs_discovered": 3, 00:12:18.769 "num_base_bdevs_operational": 4, 00:12:18.769 "base_bdevs_list": [ 00:12:18.769 { 00:12:18.769 "name": "BaseBdev1", 00:12:18.769 "uuid": "5600746d-e40b-42ff-8aa7-c8eaff2d5d2d", 00:12:18.769 "is_configured": true, 00:12:18.769 "data_offset": 0, 00:12:18.769 "data_size": 65536 00:12:18.769 }, 00:12:18.769 { 00:12:18.769 "name": "BaseBdev2", 00:12:18.769 "uuid": "449c20bf-b26e-48e6-9b00-5c61190e3bdc", 00:12:18.769 "is_configured": true, 00:12:18.769 "data_offset": 0, 00:12:18.769 "data_size": 65536 00:12:18.769 }, 00:12:18.769 { 00:12:18.769 "name": "BaseBdev3", 00:12:18.769 "uuid": "acd13511-378c-4fa4-99f6-918ea1ad549f", 00:12:18.769 "is_configured": true, 00:12:18.769 "data_offset": 0, 00:12:18.769 "data_size": 65536 00:12:18.769 }, 00:12:18.769 { 00:12:18.769 "name": "BaseBdev4", 00:12:18.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.769 "is_configured": false, 
00:12:18.769 "data_offset": 0, 00:12:18.769 "data_size": 0 00:12:18.769 } 00:12:18.769 ] 00:12:18.769 }' 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.769 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.335 [2024-11-08 16:53:48.596112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:19.335 [2024-11-08 16:53:48.596283] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:19.335 [2024-11-08 16:53:48.596319] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:19.335 [2024-11-08 16:53:48.596727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:19.335 [2024-11-08 16:53:48.596951] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:19.335 [2024-11-08 16:53:48.597011] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:12:19.335 BaseBdev4 00:12:19.335 [2024-11-08 16:53:48.597288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.335 [ 00:12:19.335 { 00:12:19.335 "name": "BaseBdev4", 00:12:19.335 "aliases": [ 00:12:19.335 "bbfb14ef-03db-4838-b0ec-9cc1692c0d56" 00:12:19.335 ], 00:12:19.335 "product_name": "Malloc disk", 00:12:19.335 "block_size": 512, 00:12:19.335 "num_blocks": 65536, 00:12:19.335 "uuid": "bbfb14ef-03db-4838-b0ec-9cc1692c0d56", 00:12:19.335 "assigned_rate_limits": { 00:12:19.335 "rw_ios_per_sec": 0, 00:12:19.335 "rw_mbytes_per_sec": 0, 00:12:19.335 "r_mbytes_per_sec": 0, 00:12:19.335 "w_mbytes_per_sec": 0 00:12:19.335 }, 00:12:19.335 "claimed": true, 00:12:19.335 "claim_type": "exclusive_write", 00:12:19.335 "zoned": false, 00:12:19.335 "supported_io_types": { 00:12:19.335 "read": true, 00:12:19.335 "write": true, 00:12:19.335 "unmap": true, 00:12:19.335 "flush": true, 00:12:19.335 "reset": true, 00:12:19.335 
"nvme_admin": false, 00:12:19.335 "nvme_io": false, 00:12:19.335 "nvme_io_md": false, 00:12:19.335 "write_zeroes": true, 00:12:19.335 "zcopy": true, 00:12:19.335 "get_zone_info": false, 00:12:19.335 "zone_management": false, 00:12:19.335 "zone_append": false, 00:12:19.335 "compare": false, 00:12:19.335 "compare_and_write": false, 00:12:19.335 "abort": true, 00:12:19.335 "seek_hole": false, 00:12:19.335 "seek_data": false, 00:12:19.335 "copy": true, 00:12:19.335 "nvme_iov_md": false 00:12:19.335 }, 00:12:19.335 "memory_domains": [ 00:12:19.335 { 00:12:19.335 "dma_device_id": "system", 00:12:19.335 "dma_device_type": 1 00:12:19.335 }, 00:12:19.335 { 00:12:19.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.335 "dma_device_type": 2 00:12:19.335 } 00:12:19.335 ], 00:12:19.335 "driver_specific": {} 00:12:19.335 } 00:12:19.335 ] 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.335 16:53:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.335 "name": "Existed_Raid", 00:12:19.335 "uuid": "39977846-4f4f-4c18-8236-f71fffe83894", 00:12:19.335 "strip_size_kb": 0, 00:12:19.335 "state": "online", 00:12:19.335 "raid_level": "raid1", 00:12:19.335 "superblock": false, 00:12:19.335 "num_base_bdevs": 4, 00:12:19.335 "num_base_bdevs_discovered": 4, 00:12:19.335 "num_base_bdevs_operational": 4, 00:12:19.335 "base_bdevs_list": [ 00:12:19.335 { 00:12:19.335 "name": "BaseBdev1", 00:12:19.335 "uuid": "5600746d-e40b-42ff-8aa7-c8eaff2d5d2d", 00:12:19.335 "is_configured": true, 00:12:19.335 "data_offset": 0, 00:12:19.335 "data_size": 65536 00:12:19.335 }, 00:12:19.335 { 00:12:19.335 "name": "BaseBdev2", 00:12:19.335 "uuid": "449c20bf-b26e-48e6-9b00-5c61190e3bdc", 00:12:19.335 "is_configured": true, 00:12:19.335 "data_offset": 0, 00:12:19.335 "data_size": 65536 00:12:19.335 }, 00:12:19.335 { 00:12:19.335 "name": "BaseBdev3", 00:12:19.335 "uuid": 
"acd13511-378c-4fa4-99f6-918ea1ad549f", 00:12:19.335 "is_configured": true, 00:12:19.335 "data_offset": 0, 00:12:19.335 "data_size": 65536 00:12:19.335 }, 00:12:19.335 { 00:12:19.335 "name": "BaseBdev4", 00:12:19.335 "uuid": "bbfb14ef-03db-4838-b0ec-9cc1692c0d56", 00:12:19.335 "is_configured": true, 00:12:19.335 "data_offset": 0, 00:12:19.335 "data_size": 65536 00:12:19.335 } 00:12:19.335 ] 00:12:19.335 }' 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.335 16:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.594 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:19.594 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:19.594 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:19.594 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:19.594 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:19.594 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:19.594 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:19.594 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.594 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.594 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:19.594 [2024-11-08 16:53:49.103802] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.594 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.853 16:53:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.853 "name": "Existed_Raid", 00:12:19.853 "aliases": [ 00:12:19.853 "39977846-4f4f-4c18-8236-f71fffe83894" 00:12:19.853 ], 00:12:19.853 "product_name": "Raid Volume", 00:12:19.853 "block_size": 512, 00:12:19.853 "num_blocks": 65536, 00:12:19.853 "uuid": "39977846-4f4f-4c18-8236-f71fffe83894", 00:12:19.853 "assigned_rate_limits": { 00:12:19.853 "rw_ios_per_sec": 0, 00:12:19.853 "rw_mbytes_per_sec": 0, 00:12:19.853 "r_mbytes_per_sec": 0, 00:12:19.853 "w_mbytes_per_sec": 0 00:12:19.853 }, 00:12:19.853 "claimed": false, 00:12:19.853 "zoned": false, 00:12:19.853 "supported_io_types": { 00:12:19.853 "read": true, 00:12:19.853 "write": true, 00:12:19.853 "unmap": false, 00:12:19.853 "flush": false, 00:12:19.853 "reset": true, 00:12:19.853 "nvme_admin": false, 00:12:19.853 "nvme_io": false, 00:12:19.853 "nvme_io_md": false, 00:12:19.853 "write_zeroes": true, 00:12:19.853 "zcopy": false, 00:12:19.853 "get_zone_info": false, 00:12:19.853 "zone_management": false, 00:12:19.853 "zone_append": false, 00:12:19.853 "compare": false, 00:12:19.853 "compare_and_write": false, 00:12:19.853 "abort": false, 00:12:19.853 "seek_hole": false, 00:12:19.853 "seek_data": false, 00:12:19.853 "copy": false, 00:12:19.853 "nvme_iov_md": false 00:12:19.853 }, 00:12:19.853 "memory_domains": [ 00:12:19.853 { 00:12:19.853 "dma_device_id": "system", 00:12:19.853 "dma_device_type": 1 00:12:19.853 }, 00:12:19.853 { 00:12:19.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.853 "dma_device_type": 2 00:12:19.853 }, 00:12:19.853 { 00:12:19.853 "dma_device_id": "system", 00:12:19.853 "dma_device_type": 1 00:12:19.853 }, 00:12:19.853 { 00:12:19.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.853 "dma_device_type": 2 00:12:19.853 }, 00:12:19.853 { 00:12:19.853 "dma_device_id": "system", 00:12:19.853 "dma_device_type": 1 00:12:19.853 }, 00:12:19.853 { 00:12:19.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:19.853 "dma_device_type": 2 00:12:19.853 }, 00:12:19.853 { 00:12:19.853 "dma_device_id": "system", 00:12:19.853 "dma_device_type": 1 00:12:19.853 }, 00:12:19.853 { 00:12:19.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.853 "dma_device_type": 2 00:12:19.853 } 00:12:19.853 ], 00:12:19.853 "driver_specific": { 00:12:19.853 "raid": { 00:12:19.853 "uuid": "39977846-4f4f-4c18-8236-f71fffe83894", 00:12:19.853 "strip_size_kb": 0, 00:12:19.853 "state": "online", 00:12:19.853 "raid_level": "raid1", 00:12:19.853 "superblock": false, 00:12:19.853 "num_base_bdevs": 4, 00:12:19.853 "num_base_bdevs_discovered": 4, 00:12:19.853 "num_base_bdevs_operational": 4, 00:12:19.853 "base_bdevs_list": [ 00:12:19.853 { 00:12:19.853 "name": "BaseBdev1", 00:12:19.853 "uuid": "5600746d-e40b-42ff-8aa7-c8eaff2d5d2d", 00:12:19.853 "is_configured": true, 00:12:19.853 "data_offset": 0, 00:12:19.853 "data_size": 65536 00:12:19.853 }, 00:12:19.853 { 00:12:19.853 "name": "BaseBdev2", 00:12:19.853 "uuid": "449c20bf-b26e-48e6-9b00-5c61190e3bdc", 00:12:19.853 "is_configured": true, 00:12:19.853 "data_offset": 0, 00:12:19.853 "data_size": 65536 00:12:19.853 }, 00:12:19.853 { 00:12:19.853 "name": "BaseBdev3", 00:12:19.853 "uuid": "acd13511-378c-4fa4-99f6-918ea1ad549f", 00:12:19.853 "is_configured": true, 00:12:19.853 "data_offset": 0, 00:12:19.853 "data_size": 65536 00:12:19.853 }, 00:12:19.853 { 00:12:19.853 "name": "BaseBdev4", 00:12:19.853 "uuid": "bbfb14ef-03db-4838-b0ec-9cc1692c0d56", 00:12:19.853 "is_configured": true, 00:12:19.853 "data_offset": 0, 00:12:19.853 "data_size": 65536 00:12:19.853 } 00:12:19.853 ] 00:12:19.853 } 00:12:19.853 } 00:12:19.853 }' 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:19.853 BaseBdev2 00:12:19.853 BaseBdev3 
00:12:19.853 BaseBdev4' 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.853 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.854 16:53:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.854 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.114 16:53:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.114 [2024-11-08 16:53:49.447333] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.114 
16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.114 "name": "Existed_Raid", 00:12:20.114 "uuid": "39977846-4f4f-4c18-8236-f71fffe83894", 00:12:20.114 "strip_size_kb": 0, 00:12:20.114 "state": "online", 00:12:20.114 "raid_level": "raid1", 00:12:20.114 "superblock": false, 00:12:20.114 "num_base_bdevs": 4, 00:12:20.114 "num_base_bdevs_discovered": 3, 00:12:20.114 "num_base_bdevs_operational": 3, 00:12:20.114 "base_bdevs_list": [ 00:12:20.114 { 00:12:20.114 "name": null, 00:12:20.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.114 "is_configured": false, 00:12:20.114 "data_offset": 0, 00:12:20.114 "data_size": 65536 00:12:20.114 }, 00:12:20.114 { 00:12:20.114 "name": "BaseBdev2", 00:12:20.114 "uuid": "449c20bf-b26e-48e6-9b00-5c61190e3bdc", 00:12:20.114 "is_configured": true, 00:12:20.114 "data_offset": 0, 00:12:20.114 "data_size": 65536 00:12:20.114 }, 00:12:20.114 { 00:12:20.114 "name": "BaseBdev3", 00:12:20.114 "uuid": "acd13511-378c-4fa4-99f6-918ea1ad549f", 00:12:20.114 "is_configured": true, 00:12:20.114 "data_offset": 0, 
00:12:20.114 "data_size": 65536 00:12:20.114 }, 00:12:20.114 { 00:12:20.114 "name": "BaseBdev4", 00:12:20.114 "uuid": "bbfb14ef-03db-4838-b0ec-9cc1692c0d56", 00:12:20.114 "is_configured": true, 00:12:20.114 "data_offset": 0, 00:12:20.114 "data_size": 65536 00:12:20.114 } 00:12:20.114 ] 00:12:20.114 }' 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.114 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.685 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:20.685 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.685 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:20.685 16:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.685 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.685 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.685 16:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.685 [2024-11-08 16:53:50.031322] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.685 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.686 [2024-11-08 16:53:50.103202] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.686 [2024-11-08 16:53:50.175519] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:20.686 [2024-11-08 16:53:50.175817] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.686 [2024-11-08 16:53:50.189331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.686 [2024-11-08 16:53:50.189389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.686 [2024-11-08 16:53:50.189402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.686 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.955 BaseBdev2 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.955 [ 00:12:20.955 { 00:12:20.955 "name": "BaseBdev2", 00:12:20.955 "aliases": [ 00:12:20.955 "53f260c1-007e-4415-9a56-07f3fcab1d70" 00:12:20.955 ], 00:12:20.955 "product_name": "Malloc disk", 00:12:20.955 "block_size": 512, 00:12:20.955 "num_blocks": 65536, 00:12:20.955 "uuid": "53f260c1-007e-4415-9a56-07f3fcab1d70", 00:12:20.955 "assigned_rate_limits": { 00:12:20.955 "rw_ios_per_sec": 0, 00:12:20.955 "rw_mbytes_per_sec": 0, 00:12:20.955 "r_mbytes_per_sec": 0, 00:12:20.955 "w_mbytes_per_sec": 0 00:12:20.955 }, 00:12:20.955 "claimed": false, 00:12:20.955 "zoned": false, 00:12:20.955 "supported_io_types": { 00:12:20.955 "read": true, 00:12:20.955 "write": true, 00:12:20.955 "unmap": true, 00:12:20.955 "flush": true, 00:12:20.955 "reset": true, 00:12:20.955 "nvme_admin": false, 00:12:20.955 "nvme_io": false, 00:12:20.955 "nvme_io_md": false, 00:12:20.955 "write_zeroes": true, 00:12:20.955 "zcopy": true, 00:12:20.955 "get_zone_info": false, 00:12:20.955 "zone_management": false, 00:12:20.955 "zone_append": false, 
00:12:20.955 "compare": false, 00:12:20.955 "compare_and_write": false, 00:12:20.955 "abort": true, 00:12:20.955 "seek_hole": false, 00:12:20.955 "seek_data": false, 00:12:20.955 "copy": true, 00:12:20.955 "nvme_iov_md": false 00:12:20.955 }, 00:12:20.955 "memory_domains": [ 00:12:20.955 { 00:12:20.955 "dma_device_id": "system", 00:12:20.955 "dma_device_type": 1 00:12:20.955 }, 00:12:20.955 { 00:12:20.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.955 "dma_device_type": 2 00:12:20.955 } 00:12:20.955 ], 00:12:20.955 "driver_specific": {} 00:12:20.955 } 00:12:20.955 ] 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.955 BaseBdev3 00:12:20.955 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.956 [ 00:12:20.956 { 00:12:20.956 "name": "BaseBdev3", 00:12:20.956 "aliases": [ 00:12:20.956 "8d416099-3572-47a6-a47d-3ad66815e1bb" 00:12:20.956 ], 00:12:20.956 "product_name": "Malloc disk", 00:12:20.956 "block_size": 512, 00:12:20.956 "num_blocks": 65536, 00:12:20.956 "uuid": "8d416099-3572-47a6-a47d-3ad66815e1bb", 00:12:20.956 "assigned_rate_limits": { 00:12:20.956 "rw_ios_per_sec": 0, 00:12:20.956 "rw_mbytes_per_sec": 0, 00:12:20.956 "r_mbytes_per_sec": 0, 00:12:20.956 "w_mbytes_per_sec": 0 00:12:20.956 }, 00:12:20.956 "claimed": false, 00:12:20.956 "zoned": false, 00:12:20.956 "supported_io_types": { 00:12:20.956 "read": true, 00:12:20.956 "write": true, 00:12:20.956 "unmap": true, 00:12:20.956 "flush": true, 00:12:20.956 "reset": true, 00:12:20.956 "nvme_admin": false, 00:12:20.956 "nvme_io": false, 00:12:20.956 "nvme_io_md": false, 00:12:20.956 "write_zeroes": true, 00:12:20.956 "zcopy": true, 00:12:20.956 "get_zone_info": false, 00:12:20.956 "zone_management": false, 00:12:20.956 "zone_append": false, 
00:12:20.956 "compare": false, 00:12:20.956 "compare_and_write": false, 00:12:20.956 "abort": true, 00:12:20.956 "seek_hole": false, 00:12:20.956 "seek_data": false, 00:12:20.956 "copy": true, 00:12:20.956 "nvme_iov_md": false 00:12:20.956 }, 00:12:20.956 "memory_domains": [ 00:12:20.956 { 00:12:20.956 "dma_device_id": "system", 00:12:20.956 "dma_device_type": 1 00:12:20.956 }, 00:12:20.956 { 00:12:20.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.956 "dma_device_type": 2 00:12:20.956 } 00:12:20.956 ], 00:12:20.956 "driver_specific": {} 00:12:20.956 } 00:12:20.956 ] 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.956 BaseBdev4 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.956 [ 00:12:20.956 { 00:12:20.956 "name": "BaseBdev4", 00:12:20.956 "aliases": [ 00:12:20.956 "a1cd67a1-3e67-4876-a0b3-c598c8f1d0a6" 00:12:20.956 ], 00:12:20.956 "product_name": "Malloc disk", 00:12:20.956 "block_size": 512, 00:12:20.956 "num_blocks": 65536, 00:12:20.956 "uuid": "a1cd67a1-3e67-4876-a0b3-c598c8f1d0a6", 00:12:20.956 "assigned_rate_limits": { 00:12:20.956 "rw_ios_per_sec": 0, 00:12:20.956 "rw_mbytes_per_sec": 0, 00:12:20.956 "r_mbytes_per_sec": 0, 00:12:20.956 "w_mbytes_per_sec": 0 00:12:20.956 }, 00:12:20.956 "claimed": false, 00:12:20.956 "zoned": false, 00:12:20.956 "supported_io_types": { 00:12:20.956 "read": true, 00:12:20.956 "write": true, 00:12:20.956 "unmap": true, 00:12:20.956 "flush": true, 00:12:20.956 "reset": true, 00:12:20.956 "nvme_admin": false, 00:12:20.956 "nvme_io": false, 00:12:20.956 "nvme_io_md": false, 00:12:20.956 "write_zeroes": true, 00:12:20.956 "zcopy": true, 00:12:20.956 "get_zone_info": false, 00:12:20.956 "zone_management": false, 00:12:20.956 "zone_append": false, 
00:12:20.956 "compare": false, 00:12:20.956 "compare_and_write": false, 00:12:20.956 "abort": true, 00:12:20.956 "seek_hole": false, 00:12:20.956 "seek_data": false, 00:12:20.956 "copy": true, 00:12:20.956 "nvme_iov_md": false 00:12:20.956 }, 00:12:20.956 "memory_domains": [ 00:12:20.956 { 00:12:20.956 "dma_device_id": "system", 00:12:20.956 "dma_device_type": 1 00:12:20.956 }, 00:12:20.956 { 00:12:20.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.956 "dma_device_type": 2 00:12:20.956 } 00:12:20.956 ], 00:12:20.956 "driver_specific": {} 00:12:20.956 } 00:12:20.956 ] 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.956 [2024-11-08 16:53:50.425572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.956 [2024-11-08 16:53:50.425748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.956 [2024-11-08 16:53:50.425831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.956 [2024-11-08 16:53:50.428349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.956 [2024-11-08 16:53:50.428467] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.956 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.216 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:21.216 "name": "Existed_Raid", 00:12:21.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.216 "strip_size_kb": 0, 00:12:21.216 "state": "configuring", 00:12:21.216 "raid_level": "raid1", 00:12:21.216 "superblock": false, 00:12:21.216 "num_base_bdevs": 4, 00:12:21.216 "num_base_bdevs_discovered": 3, 00:12:21.216 "num_base_bdevs_operational": 4, 00:12:21.216 "base_bdevs_list": [ 00:12:21.216 { 00:12:21.216 "name": "BaseBdev1", 00:12:21.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.216 "is_configured": false, 00:12:21.216 "data_offset": 0, 00:12:21.216 "data_size": 0 00:12:21.216 }, 00:12:21.216 { 00:12:21.216 "name": "BaseBdev2", 00:12:21.216 "uuid": "53f260c1-007e-4415-9a56-07f3fcab1d70", 00:12:21.216 "is_configured": true, 00:12:21.216 "data_offset": 0, 00:12:21.216 "data_size": 65536 00:12:21.216 }, 00:12:21.216 { 00:12:21.216 "name": "BaseBdev3", 00:12:21.216 "uuid": "8d416099-3572-47a6-a47d-3ad66815e1bb", 00:12:21.216 "is_configured": true, 00:12:21.216 "data_offset": 0, 00:12:21.216 "data_size": 65536 00:12:21.216 }, 00:12:21.216 { 00:12:21.216 "name": "BaseBdev4", 00:12:21.216 "uuid": "a1cd67a1-3e67-4876-a0b3-c598c8f1d0a6", 00:12:21.216 "is_configured": true, 00:12:21.216 "data_offset": 0, 00:12:21.216 "data_size": 65536 00:12:21.216 } 00:12:21.216 ] 00:12:21.216 }' 00:12:21.216 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.216 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.477 [2024-11-08 16:53:50.920736] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.477 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.477 "name": "Existed_Raid", 00:12:21.477 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:21.477 "strip_size_kb": 0, 00:12:21.477 "state": "configuring", 00:12:21.477 "raid_level": "raid1", 00:12:21.477 "superblock": false, 00:12:21.477 "num_base_bdevs": 4, 00:12:21.477 "num_base_bdevs_discovered": 2, 00:12:21.477 "num_base_bdevs_operational": 4, 00:12:21.477 "base_bdevs_list": [ 00:12:21.477 { 00:12:21.477 "name": "BaseBdev1", 00:12:21.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.477 "is_configured": false, 00:12:21.477 "data_offset": 0, 00:12:21.477 "data_size": 0 00:12:21.477 }, 00:12:21.477 { 00:12:21.477 "name": null, 00:12:21.477 "uuid": "53f260c1-007e-4415-9a56-07f3fcab1d70", 00:12:21.477 "is_configured": false, 00:12:21.477 "data_offset": 0, 00:12:21.477 "data_size": 65536 00:12:21.477 }, 00:12:21.477 { 00:12:21.477 "name": "BaseBdev3", 00:12:21.477 "uuid": "8d416099-3572-47a6-a47d-3ad66815e1bb", 00:12:21.477 "is_configured": true, 00:12:21.477 "data_offset": 0, 00:12:21.477 "data_size": 65536 00:12:21.478 }, 00:12:21.478 { 00:12:21.478 "name": "BaseBdev4", 00:12:21.478 "uuid": "a1cd67a1-3e67-4876-a0b3-c598c8f1d0a6", 00:12:21.478 "is_configured": true, 00:12:21.478 "data_offset": 0, 00:12:21.478 "data_size": 65536 00:12:21.478 } 00:12:21.478 ] 00:12:21.478 }' 00:12:21.478 16:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.478 16:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.047 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:22.047 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.047 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.047 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.047 16:53:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.047 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:22.047 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:22.047 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.047 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.048 [2024-11-08 16:53:51.431674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.048 BaseBdev1 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.048 [ 00:12:22.048 { 00:12:22.048 "name": "BaseBdev1", 00:12:22.048 "aliases": [ 00:12:22.048 "bc77e012-6e8f-4adc-81dc-695c3d6b9eaa" 00:12:22.048 ], 00:12:22.048 "product_name": "Malloc disk", 00:12:22.048 "block_size": 512, 00:12:22.048 "num_blocks": 65536, 00:12:22.048 "uuid": "bc77e012-6e8f-4adc-81dc-695c3d6b9eaa", 00:12:22.048 "assigned_rate_limits": { 00:12:22.048 "rw_ios_per_sec": 0, 00:12:22.048 "rw_mbytes_per_sec": 0, 00:12:22.048 "r_mbytes_per_sec": 0, 00:12:22.048 "w_mbytes_per_sec": 0 00:12:22.048 }, 00:12:22.048 "claimed": true, 00:12:22.048 "claim_type": "exclusive_write", 00:12:22.048 "zoned": false, 00:12:22.048 "supported_io_types": { 00:12:22.048 "read": true, 00:12:22.048 "write": true, 00:12:22.048 "unmap": true, 00:12:22.048 "flush": true, 00:12:22.048 "reset": true, 00:12:22.048 "nvme_admin": false, 00:12:22.048 "nvme_io": false, 00:12:22.048 "nvme_io_md": false, 00:12:22.048 "write_zeroes": true, 00:12:22.048 "zcopy": true, 00:12:22.048 "get_zone_info": false, 00:12:22.048 "zone_management": false, 00:12:22.048 "zone_append": false, 00:12:22.048 "compare": false, 00:12:22.048 "compare_and_write": false, 00:12:22.048 "abort": true, 00:12:22.048 "seek_hole": false, 00:12:22.048 "seek_data": false, 00:12:22.048 "copy": true, 00:12:22.048 "nvme_iov_md": false 00:12:22.048 }, 00:12:22.048 "memory_domains": [ 00:12:22.048 { 00:12:22.048 "dma_device_id": "system", 00:12:22.048 "dma_device_type": 1 00:12:22.048 }, 00:12:22.048 { 00:12:22.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.048 "dma_device_type": 2 00:12:22.048 } 00:12:22.048 ], 00:12:22.048 "driver_specific": {} 00:12:22.048 } 00:12:22.048 ] 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.048 "name": "Existed_Raid", 00:12:22.048 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:22.048 "strip_size_kb": 0, 00:12:22.048 "state": "configuring", 00:12:22.048 "raid_level": "raid1", 00:12:22.048 "superblock": false, 00:12:22.048 "num_base_bdevs": 4, 00:12:22.048 "num_base_bdevs_discovered": 3, 00:12:22.048 "num_base_bdevs_operational": 4, 00:12:22.048 "base_bdevs_list": [ 00:12:22.048 { 00:12:22.048 "name": "BaseBdev1", 00:12:22.048 "uuid": "bc77e012-6e8f-4adc-81dc-695c3d6b9eaa", 00:12:22.048 "is_configured": true, 00:12:22.048 "data_offset": 0, 00:12:22.048 "data_size": 65536 00:12:22.048 }, 00:12:22.048 { 00:12:22.048 "name": null, 00:12:22.048 "uuid": "53f260c1-007e-4415-9a56-07f3fcab1d70", 00:12:22.048 "is_configured": false, 00:12:22.048 "data_offset": 0, 00:12:22.048 "data_size": 65536 00:12:22.048 }, 00:12:22.048 { 00:12:22.048 "name": "BaseBdev3", 00:12:22.048 "uuid": "8d416099-3572-47a6-a47d-3ad66815e1bb", 00:12:22.048 "is_configured": true, 00:12:22.048 "data_offset": 0, 00:12:22.048 "data_size": 65536 00:12:22.048 }, 00:12:22.048 { 00:12:22.048 "name": "BaseBdev4", 00:12:22.048 "uuid": "a1cd67a1-3e67-4876-a0b3-c598c8f1d0a6", 00:12:22.048 "is_configured": true, 00:12:22.048 "data_offset": 0, 00:12:22.048 "data_size": 65536 00:12:22.048 } 00:12:22.048 ] 00:12:22.048 }' 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.048 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.616 [2024-11-08 16:53:51.967278] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.616 16:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.616 16:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.616 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.616 "name": "Existed_Raid", 00:12:22.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.616 "strip_size_kb": 0, 00:12:22.616 "state": "configuring", 00:12:22.616 "raid_level": "raid1", 00:12:22.616 "superblock": false, 00:12:22.616 "num_base_bdevs": 4, 00:12:22.616 "num_base_bdevs_discovered": 2, 00:12:22.616 "num_base_bdevs_operational": 4, 00:12:22.616 "base_bdevs_list": [ 00:12:22.616 { 00:12:22.616 "name": "BaseBdev1", 00:12:22.616 "uuid": "bc77e012-6e8f-4adc-81dc-695c3d6b9eaa", 00:12:22.616 "is_configured": true, 00:12:22.616 "data_offset": 0, 00:12:22.616 "data_size": 65536 00:12:22.616 }, 00:12:22.616 { 00:12:22.616 "name": null, 00:12:22.616 "uuid": "53f260c1-007e-4415-9a56-07f3fcab1d70", 00:12:22.616 "is_configured": false, 00:12:22.616 "data_offset": 0, 00:12:22.616 "data_size": 65536 00:12:22.616 }, 00:12:22.616 { 00:12:22.616 "name": null, 00:12:22.616 "uuid": "8d416099-3572-47a6-a47d-3ad66815e1bb", 00:12:22.616 "is_configured": false, 00:12:22.616 "data_offset": 0, 00:12:22.616 "data_size": 65536 00:12:22.616 }, 00:12:22.616 { 00:12:22.616 "name": "BaseBdev4", 00:12:22.616 "uuid": "a1cd67a1-3e67-4876-a0b3-c598c8f1d0a6", 00:12:22.616 "is_configured": true, 00:12:22.616 "data_offset": 0, 00:12:22.616 "data_size": 65536 00:12:22.616 } 00:12:22.616 ] 00:12:22.616 }' 00:12:22.616 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.616 16:53:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.186 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.186 16:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.186 16:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.186 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:23.186 16:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.186 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:23.186 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:23.186 16:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.186 16:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.187 [2024-11-08 16:53:52.539317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.187 16:53:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.187 "name": "Existed_Raid", 00:12:23.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.187 "strip_size_kb": 0, 00:12:23.187 "state": "configuring", 00:12:23.187 "raid_level": "raid1", 00:12:23.187 "superblock": false, 00:12:23.187 "num_base_bdevs": 4, 00:12:23.187 "num_base_bdevs_discovered": 3, 00:12:23.187 "num_base_bdevs_operational": 4, 00:12:23.187 "base_bdevs_list": [ 00:12:23.187 { 00:12:23.187 "name": "BaseBdev1", 00:12:23.187 "uuid": "bc77e012-6e8f-4adc-81dc-695c3d6b9eaa", 00:12:23.187 "is_configured": true, 00:12:23.187 "data_offset": 0, 00:12:23.187 "data_size": 65536 00:12:23.187 }, 00:12:23.187 { 00:12:23.187 "name": null, 00:12:23.187 "uuid": "53f260c1-007e-4415-9a56-07f3fcab1d70", 00:12:23.187 "is_configured": false, 00:12:23.187 "data_offset": 
0, 00:12:23.187 "data_size": 65536 00:12:23.187 }, 00:12:23.187 { 00:12:23.187 "name": "BaseBdev3", 00:12:23.187 "uuid": "8d416099-3572-47a6-a47d-3ad66815e1bb", 00:12:23.187 "is_configured": true, 00:12:23.187 "data_offset": 0, 00:12:23.187 "data_size": 65536 00:12:23.187 }, 00:12:23.187 { 00:12:23.187 "name": "BaseBdev4", 00:12:23.187 "uuid": "a1cd67a1-3e67-4876-a0b3-c598c8f1d0a6", 00:12:23.187 "is_configured": true, 00:12:23.187 "data_offset": 0, 00:12:23.187 "data_size": 65536 00:12:23.187 } 00:12:23.187 ] 00:12:23.187 }' 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.187 16:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.753 [2024-11-08 16:53:53.055397] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.753 16:53:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.753 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.754 "name": "Existed_Raid", 00:12:23.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.754 "strip_size_kb": 0, 00:12:23.754 "state": "configuring", 00:12:23.754 
"raid_level": "raid1", 00:12:23.754 "superblock": false, 00:12:23.754 "num_base_bdevs": 4, 00:12:23.754 "num_base_bdevs_discovered": 2, 00:12:23.754 "num_base_bdevs_operational": 4, 00:12:23.754 "base_bdevs_list": [ 00:12:23.754 { 00:12:23.754 "name": null, 00:12:23.754 "uuid": "bc77e012-6e8f-4adc-81dc-695c3d6b9eaa", 00:12:23.754 "is_configured": false, 00:12:23.754 "data_offset": 0, 00:12:23.754 "data_size": 65536 00:12:23.754 }, 00:12:23.754 { 00:12:23.754 "name": null, 00:12:23.754 "uuid": "53f260c1-007e-4415-9a56-07f3fcab1d70", 00:12:23.754 "is_configured": false, 00:12:23.754 "data_offset": 0, 00:12:23.754 "data_size": 65536 00:12:23.754 }, 00:12:23.754 { 00:12:23.754 "name": "BaseBdev3", 00:12:23.754 "uuid": "8d416099-3572-47a6-a47d-3ad66815e1bb", 00:12:23.754 "is_configured": true, 00:12:23.754 "data_offset": 0, 00:12:23.754 "data_size": 65536 00:12:23.754 }, 00:12:23.754 { 00:12:23.754 "name": "BaseBdev4", 00:12:23.754 "uuid": "a1cd67a1-3e67-4876-a0b3-c598c8f1d0a6", 00:12:23.754 "is_configured": true, 00:12:23.754 "data_offset": 0, 00:12:23.754 "data_size": 65536 00:12:23.754 } 00:12:23.754 ] 00:12:23.754 }' 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.754 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.012 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.012 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:24.012 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.012 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.012 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.012 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:24.012 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:24.012 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.012 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.012 [2024-11-08 16:53:53.538010] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.272 "name": "Existed_Raid", 00:12:24.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.272 "strip_size_kb": 0, 00:12:24.272 "state": "configuring", 00:12:24.272 "raid_level": "raid1", 00:12:24.272 "superblock": false, 00:12:24.272 "num_base_bdevs": 4, 00:12:24.272 "num_base_bdevs_discovered": 3, 00:12:24.272 "num_base_bdevs_operational": 4, 00:12:24.272 "base_bdevs_list": [ 00:12:24.272 { 00:12:24.272 "name": null, 00:12:24.272 "uuid": "bc77e012-6e8f-4adc-81dc-695c3d6b9eaa", 00:12:24.272 "is_configured": false, 00:12:24.272 "data_offset": 0, 00:12:24.272 "data_size": 65536 00:12:24.272 }, 00:12:24.272 { 00:12:24.272 "name": "BaseBdev2", 00:12:24.272 "uuid": "53f260c1-007e-4415-9a56-07f3fcab1d70", 00:12:24.272 "is_configured": true, 00:12:24.272 "data_offset": 0, 00:12:24.272 "data_size": 65536 00:12:24.272 }, 00:12:24.272 { 00:12:24.272 "name": "BaseBdev3", 00:12:24.272 "uuid": "8d416099-3572-47a6-a47d-3ad66815e1bb", 00:12:24.272 "is_configured": true, 00:12:24.272 "data_offset": 0, 00:12:24.272 "data_size": 65536 00:12:24.272 }, 00:12:24.272 { 00:12:24.272 "name": "BaseBdev4", 00:12:24.272 "uuid": "a1cd67a1-3e67-4876-a0b3-c598c8f1d0a6", 00:12:24.272 "is_configured": true, 00:12:24.272 "data_offset": 0, 00:12:24.272 "data_size": 65536 00:12:24.272 } 00:12:24.272 ] 00:12:24.272 }' 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.272 16:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.533 16:53:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.533 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:24.533 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.533 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.533 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.533 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bc77e012-6e8f-4adc-81dc-695c3d6b9eaa 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.793 [2024-11-08 16:53:54.120606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:24.793 [2024-11-08 16:53:54.120792] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:24.793 [2024-11-08 16:53:54.120830] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:24.793 
[2024-11-08 16:53:54.121154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:24.793 [2024-11-08 16:53:54.121353] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:24.793 [2024-11-08 16:53:54.121401] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:12:24.793 [2024-11-08 16:53:54.121691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.793 NewBaseBdev 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.793 [ 00:12:24.793 { 00:12:24.793 "name": "NewBaseBdev", 00:12:24.793 "aliases": [ 00:12:24.793 "bc77e012-6e8f-4adc-81dc-695c3d6b9eaa" 00:12:24.793 ], 00:12:24.793 "product_name": "Malloc disk", 00:12:24.793 "block_size": 512, 00:12:24.793 "num_blocks": 65536, 00:12:24.793 "uuid": "bc77e012-6e8f-4adc-81dc-695c3d6b9eaa", 00:12:24.793 "assigned_rate_limits": { 00:12:24.793 "rw_ios_per_sec": 0, 00:12:24.793 "rw_mbytes_per_sec": 0, 00:12:24.793 "r_mbytes_per_sec": 0, 00:12:24.793 "w_mbytes_per_sec": 0 00:12:24.793 }, 00:12:24.793 "claimed": true, 00:12:24.793 "claim_type": "exclusive_write", 00:12:24.793 "zoned": false, 00:12:24.793 "supported_io_types": { 00:12:24.793 "read": true, 00:12:24.793 "write": true, 00:12:24.793 "unmap": true, 00:12:24.793 "flush": true, 00:12:24.793 "reset": true, 00:12:24.793 "nvme_admin": false, 00:12:24.793 "nvme_io": false, 00:12:24.793 "nvme_io_md": false, 00:12:24.793 "write_zeroes": true, 00:12:24.793 "zcopy": true, 00:12:24.793 "get_zone_info": false, 00:12:24.793 "zone_management": false, 00:12:24.793 "zone_append": false, 00:12:24.793 "compare": false, 00:12:24.793 "compare_and_write": false, 00:12:24.793 "abort": true, 00:12:24.793 "seek_hole": false, 00:12:24.793 "seek_data": false, 00:12:24.793 "copy": true, 00:12:24.793 "nvme_iov_md": false 00:12:24.793 }, 00:12:24.793 "memory_domains": [ 00:12:24.793 { 00:12:24.793 "dma_device_id": "system", 00:12:24.793 "dma_device_type": 1 00:12:24.793 }, 00:12:24.793 { 00:12:24.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.793 "dma_device_type": 2 00:12:24.793 } 00:12:24.793 ], 00:12:24.793 "driver_specific": {} 00:12:24.793 } 00:12:24.793 ] 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.793 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.793 "name": "Existed_Raid", 00:12:24.793 "uuid": "66642626-3f81-400d-9550-7aa310d2203e", 00:12:24.793 "strip_size_kb": 0, 00:12:24.793 "state": "online", 00:12:24.793 
"raid_level": "raid1", 00:12:24.793 "superblock": false, 00:12:24.793 "num_base_bdevs": 4, 00:12:24.793 "num_base_bdevs_discovered": 4, 00:12:24.793 "num_base_bdevs_operational": 4, 00:12:24.794 "base_bdevs_list": [ 00:12:24.794 { 00:12:24.794 "name": "NewBaseBdev", 00:12:24.794 "uuid": "bc77e012-6e8f-4adc-81dc-695c3d6b9eaa", 00:12:24.794 "is_configured": true, 00:12:24.794 "data_offset": 0, 00:12:24.794 "data_size": 65536 00:12:24.794 }, 00:12:24.794 { 00:12:24.794 "name": "BaseBdev2", 00:12:24.794 "uuid": "53f260c1-007e-4415-9a56-07f3fcab1d70", 00:12:24.794 "is_configured": true, 00:12:24.794 "data_offset": 0, 00:12:24.794 "data_size": 65536 00:12:24.794 }, 00:12:24.794 { 00:12:24.794 "name": "BaseBdev3", 00:12:24.794 "uuid": "8d416099-3572-47a6-a47d-3ad66815e1bb", 00:12:24.794 "is_configured": true, 00:12:24.794 "data_offset": 0, 00:12:24.794 "data_size": 65536 00:12:24.794 }, 00:12:24.794 { 00:12:24.794 "name": "BaseBdev4", 00:12:24.794 "uuid": "a1cd67a1-3e67-4876-a0b3-c598c8f1d0a6", 00:12:24.794 "is_configured": true, 00:12:24.794 "data_offset": 0, 00:12:24.794 "data_size": 65536 00:12:24.794 } 00:12:24.794 ] 00:12:24.794 }' 00:12:24.794 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.794 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.361 [2024-11-08 16:53:54.636174] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.361 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.361 "name": "Existed_Raid", 00:12:25.361 "aliases": [ 00:12:25.361 "66642626-3f81-400d-9550-7aa310d2203e" 00:12:25.361 ], 00:12:25.361 "product_name": "Raid Volume", 00:12:25.361 "block_size": 512, 00:12:25.361 "num_blocks": 65536, 00:12:25.361 "uuid": "66642626-3f81-400d-9550-7aa310d2203e", 00:12:25.361 "assigned_rate_limits": { 00:12:25.361 "rw_ios_per_sec": 0, 00:12:25.361 "rw_mbytes_per_sec": 0, 00:12:25.361 "r_mbytes_per_sec": 0, 00:12:25.361 "w_mbytes_per_sec": 0 00:12:25.361 }, 00:12:25.361 "claimed": false, 00:12:25.361 "zoned": false, 00:12:25.361 "supported_io_types": { 00:12:25.361 "read": true, 00:12:25.361 "write": true, 00:12:25.361 "unmap": false, 00:12:25.361 "flush": false, 00:12:25.361 "reset": true, 00:12:25.361 "nvme_admin": false, 00:12:25.361 "nvme_io": false, 00:12:25.361 "nvme_io_md": false, 00:12:25.361 "write_zeroes": true, 00:12:25.361 "zcopy": false, 00:12:25.361 "get_zone_info": false, 00:12:25.361 "zone_management": false, 00:12:25.361 "zone_append": false, 00:12:25.361 "compare": false, 00:12:25.361 "compare_and_write": false, 00:12:25.361 "abort": false, 00:12:25.361 "seek_hole": false, 00:12:25.361 "seek_data": false, 00:12:25.361 
"copy": false, 00:12:25.361 "nvme_iov_md": false 00:12:25.362 }, 00:12:25.362 "memory_domains": [ 00:12:25.362 { 00:12:25.362 "dma_device_id": "system", 00:12:25.362 "dma_device_type": 1 00:12:25.362 }, 00:12:25.362 { 00:12:25.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.362 "dma_device_type": 2 00:12:25.362 }, 00:12:25.362 { 00:12:25.362 "dma_device_id": "system", 00:12:25.362 "dma_device_type": 1 00:12:25.362 }, 00:12:25.362 { 00:12:25.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.362 "dma_device_type": 2 00:12:25.362 }, 00:12:25.362 { 00:12:25.362 "dma_device_id": "system", 00:12:25.362 "dma_device_type": 1 00:12:25.362 }, 00:12:25.362 { 00:12:25.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.362 "dma_device_type": 2 00:12:25.362 }, 00:12:25.362 { 00:12:25.362 "dma_device_id": "system", 00:12:25.362 "dma_device_type": 1 00:12:25.362 }, 00:12:25.362 { 00:12:25.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.362 "dma_device_type": 2 00:12:25.362 } 00:12:25.362 ], 00:12:25.362 "driver_specific": { 00:12:25.362 "raid": { 00:12:25.362 "uuid": "66642626-3f81-400d-9550-7aa310d2203e", 00:12:25.362 "strip_size_kb": 0, 00:12:25.362 "state": "online", 00:12:25.362 "raid_level": "raid1", 00:12:25.362 "superblock": false, 00:12:25.362 "num_base_bdevs": 4, 00:12:25.362 "num_base_bdevs_discovered": 4, 00:12:25.362 "num_base_bdevs_operational": 4, 00:12:25.362 "base_bdevs_list": [ 00:12:25.362 { 00:12:25.362 "name": "NewBaseBdev", 00:12:25.362 "uuid": "bc77e012-6e8f-4adc-81dc-695c3d6b9eaa", 00:12:25.362 "is_configured": true, 00:12:25.362 "data_offset": 0, 00:12:25.362 "data_size": 65536 00:12:25.362 }, 00:12:25.362 { 00:12:25.362 "name": "BaseBdev2", 00:12:25.362 "uuid": "53f260c1-007e-4415-9a56-07f3fcab1d70", 00:12:25.362 "is_configured": true, 00:12:25.362 "data_offset": 0, 00:12:25.362 "data_size": 65536 00:12:25.362 }, 00:12:25.362 { 00:12:25.362 "name": "BaseBdev3", 00:12:25.362 "uuid": "8d416099-3572-47a6-a47d-3ad66815e1bb", 00:12:25.362 
"is_configured": true, 00:12:25.362 "data_offset": 0, 00:12:25.362 "data_size": 65536 00:12:25.362 }, 00:12:25.362 { 00:12:25.362 "name": "BaseBdev4", 00:12:25.362 "uuid": "a1cd67a1-3e67-4876-a0b3-c598c8f1d0a6", 00:12:25.362 "is_configured": true, 00:12:25.362 "data_offset": 0, 00:12:25.362 "data_size": 65536 00:12:25.362 } 00:12:25.362 ] 00:12:25.362 } 00:12:25.362 } 00:12:25.362 }' 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:25.362 BaseBdev2 00:12:25.362 BaseBdev3 00:12:25.362 BaseBdev4' 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.362 16:53:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.362 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.622 16:53:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.622 [2024-11-08 16:53:54.955250] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:25.622 [2024-11-08 16:53:54.955343] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.622 [2024-11-08 16:53:54.955490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.622 [2024-11-08 16:53:54.955843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.622 [2024-11-08 16:53:54.955918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 83989 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83989 ']' 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83989 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.622 16:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83989 00:12:25.622 16:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:25.622 16:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:25.622 killing process with pid 83989 00:12:25.622 16:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83989' 00:12:25.622 16:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 83989 00:12:25.622 [2024-11-08 16:53:55.002316] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.622 16:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 83989 00:12:25.622 [2024-11-08 16:53:55.045946] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.881 ************************************ 00:12:25.881 END TEST raid_state_function_test 00:12:25.881 ************************************ 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:25.881 00:12:25.881 real 0m10.177s 00:12:25.881 user 0m17.442s 00:12:25.881 sys 0m2.019s 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:25.881 16:53:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:25.881 16:53:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:25.881 16:53:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.881 16:53:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.881 ************************************ 00:12:25.881 START TEST raid_state_function_test_sb 00:12:25.881 ************************************ 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.881 
16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:25.881 Process raid pid: 84644 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84644 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84644' 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84644 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84644 ']' 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.881 16:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.139 [2024-11-08 16:53:55.473262] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:26.139 [2024-11-08 16:53:55.473399] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.139 [2024-11-08 16:53:55.640364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.398 [2024-11-08 16:53:55.693425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.398 [2024-11-08 16:53:55.738961] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.398 [2024-11-08 16:53:55.739088] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.966 [2024-11-08 16:53:56.386116] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:26.966 [2024-11-08 16:53:56.386180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:26.966 [2024-11-08 16:53:56.386194] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:26.966 [2024-11-08 16:53:56.386206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:26.966 [2024-11-08 16:53:56.386216] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:26.966 [2024-11-08 16:53:56.386232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:26.966 [2024-11-08 16:53:56.386240] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:26.966 [2024-11-08 16:53:56.386250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.966 16:53:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.966 "name": "Existed_Raid", 00:12:26.966 "uuid": "22f943f1-67c5-4538-81aa-140cd431238e", 00:12:26.966 "strip_size_kb": 0, 00:12:26.966 "state": "configuring", 00:12:26.966 "raid_level": "raid1", 00:12:26.966 "superblock": true, 00:12:26.966 "num_base_bdevs": 4, 00:12:26.966 "num_base_bdevs_discovered": 0, 00:12:26.966 "num_base_bdevs_operational": 4, 00:12:26.966 "base_bdevs_list": [ 00:12:26.966 { 00:12:26.966 "name": "BaseBdev1", 00:12:26.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.966 "is_configured": false, 00:12:26.966 "data_offset": 0, 00:12:26.966 "data_size": 0 00:12:26.966 }, 00:12:26.966 { 00:12:26.966 "name": "BaseBdev2", 00:12:26.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.966 "is_configured": false, 00:12:26.966 "data_offset": 0, 00:12:26.966 "data_size": 0 00:12:26.966 }, 00:12:26.966 { 00:12:26.966 "name": "BaseBdev3", 00:12:26.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.966 "is_configured": false, 00:12:26.966 "data_offset": 0, 00:12:26.966 "data_size": 0 00:12:26.966 }, 00:12:26.966 { 00:12:26.966 "name": "BaseBdev4", 00:12:26.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.966 "is_configured": false, 00:12:26.966 "data_offset": 0, 00:12:26.966 "data_size": 0 00:12:26.966 } 00:12:26.966 ] 00:12:26.966 }' 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.966 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.534 [2024-11-08 16:53:56.897143] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:27.534 [2024-11-08 16:53:56.897202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.534 [2024-11-08 16:53:56.905197] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:27.534 [2024-11-08 16:53:56.905298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:27.534 [2024-11-08 16:53:56.905334] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:27.534 [2024-11-08 16:53:56.905373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:27.534 [2024-11-08 16:53:56.905403] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:27.534 [2024-11-08 16:53:56.905435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:27.534 [2024-11-08 16:53:56.905464] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:27.534 [2024-11-08 16:53:56.905496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.534 [2024-11-08 16:53:56.923324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.534 BaseBdev1 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.534 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.534 [ 00:12:27.534 { 00:12:27.534 "name": "BaseBdev1", 00:12:27.534 "aliases": [ 00:12:27.535 "6acca11d-1d25-4924-94fb-ed2b04bfc3ab" 00:12:27.535 ], 00:12:27.535 "product_name": "Malloc disk", 00:12:27.535 "block_size": 512, 00:12:27.535 "num_blocks": 65536, 00:12:27.535 "uuid": "6acca11d-1d25-4924-94fb-ed2b04bfc3ab", 00:12:27.535 "assigned_rate_limits": { 00:12:27.535 "rw_ios_per_sec": 0, 00:12:27.535 "rw_mbytes_per_sec": 0, 00:12:27.535 "r_mbytes_per_sec": 0, 00:12:27.535 "w_mbytes_per_sec": 0 00:12:27.535 }, 00:12:27.535 "claimed": true, 00:12:27.535 "claim_type": "exclusive_write", 00:12:27.535 "zoned": false, 00:12:27.535 "supported_io_types": { 00:12:27.535 "read": true, 00:12:27.535 "write": true, 00:12:27.535 "unmap": true, 00:12:27.535 "flush": true, 00:12:27.535 "reset": true, 00:12:27.535 "nvme_admin": false, 00:12:27.535 "nvme_io": false, 00:12:27.535 "nvme_io_md": false, 00:12:27.535 "write_zeroes": true, 00:12:27.535 "zcopy": true, 00:12:27.535 "get_zone_info": false, 00:12:27.535 "zone_management": false, 00:12:27.535 "zone_append": false, 00:12:27.535 "compare": false, 00:12:27.535 "compare_and_write": false, 00:12:27.535 "abort": true, 00:12:27.535 "seek_hole": false, 00:12:27.535 "seek_data": false, 00:12:27.535 "copy": true, 00:12:27.535 "nvme_iov_md": false 00:12:27.535 }, 00:12:27.535 "memory_domains": [ 00:12:27.535 { 00:12:27.535 "dma_device_id": "system", 00:12:27.535 "dma_device_type": 1 00:12:27.535 }, 00:12:27.535 { 00:12:27.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.535 "dma_device_type": 2 00:12:27.535 } 00:12:27.535 ], 00:12:27.535 "driver_specific": {} 
00:12:27.535 } 00:12:27.535 ] 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.535 16:53:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.535 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.535 "name": "Existed_Raid", 00:12:27.535 "uuid": "3842cf20-8187-40cc-9b86-00c0c0907244", 00:12:27.535 "strip_size_kb": 0, 00:12:27.535 "state": "configuring", 00:12:27.535 "raid_level": "raid1", 00:12:27.535 "superblock": true, 00:12:27.535 "num_base_bdevs": 4, 00:12:27.535 "num_base_bdevs_discovered": 1, 00:12:27.535 "num_base_bdevs_operational": 4, 00:12:27.535 "base_bdevs_list": [ 00:12:27.535 { 00:12:27.535 "name": "BaseBdev1", 00:12:27.535 "uuid": "6acca11d-1d25-4924-94fb-ed2b04bfc3ab", 00:12:27.535 "is_configured": true, 00:12:27.535 "data_offset": 2048, 00:12:27.535 "data_size": 63488 00:12:27.535 }, 00:12:27.535 { 00:12:27.535 "name": "BaseBdev2", 00:12:27.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.535 "is_configured": false, 00:12:27.535 "data_offset": 0, 00:12:27.535 "data_size": 0 00:12:27.535 }, 00:12:27.535 { 00:12:27.535 "name": "BaseBdev3", 00:12:27.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.535 "is_configured": false, 00:12:27.535 "data_offset": 0, 00:12:27.535 "data_size": 0 00:12:27.535 }, 00:12:27.535 { 00:12:27.535 "name": "BaseBdev4", 00:12:27.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.535 "is_configured": false, 00:12:27.535 "data_offset": 0, 00:12:27.535 "data_size": 0 00:12:27.535 } 00:12:27.535 ] 00:12:27.535 }' 00:12:27.535 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.535 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:28.104 [2024-11-08 16:53:57.435149] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.104 [2024-11-08 16:53:57.435230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.104 [2024-11-08 16:53:57.447213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.104 [2024-11-08 16:53:57.449583] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.104 [2024-11-08 16:53:57.449700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.104 [2024-11-08 16:53:57.449744] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:28.104 [2024-11-08 16:53:57.449774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.104 [2024-11-08 16:53:57.449841] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:28.104 [2024-11-08 16:53:57.449869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:28.104 16:53:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.104 "name": 
"Existed_Raid", 00:12:28.104 "uuid": "b0996c2f-1cfe-43c7-a881-a30416beb82c", 00:12:28.104 "strip_size_kb": 0, 00:12:28.104 "state": "configuring", 00:12:28.104 "raid_level": "raid1", 00:12:28.104 "superblock": true, 00:12:28.104 "num_base_bdevs": 4, 00:12:28.104 "num_base_bdevs_discovered": 1, 00:12:28.104 "num_base_bdevs_operational": 4, 00:12:28.104 "base_bdevs_list": [ 00:12:28.104 { 00:12:28.104 "name": "BaseBdev1", 00:12:28.104 "uuid": "6acca11d-1d25-4924-94fb-ed2b04bfc3ab", 00:12:28.104 "is_configured": true, 00:12:28.104 "data_offset": 2048, 00:12:28.104 "data_size": 63488 00:12:28.104 }, 00:12:28.104 { 00:12:28.104 "name": "BaseBdev2", 00:12:28.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.104 "is_configured": false, 00:12:28.104 "data_offset": 0, 00:12:28.104 "data_size": 0 00:12:28.104 }, 00:12:28.104 { 00:12:28.104 "name": "BaseBdev3", 00:12:28.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.104 "is_configured": false, 00:12:28.104 "data_offset": 0, 00:12:28.104 "data_size": 0 00:12:28.104 }, 00:12:28.104 { 00:12:28.104 "name": "BaseBdev4", 00:12:28.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.104 "is_configured": false, 00:12:28.104 "data_offset": 0, 00:12:28.104 "data_size": 0 00:12:28.104 } 00:12:28.104 ] 00:12:28.104 }' 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.104 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.673 [2024-11-08 16:53:57.946667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:28.673 
BaseBdev2 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.673 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.673 [ 00:12:28.673 { 00:12:28.673 "name": "BaseBdev2", 00:12:28.673 "aliases": [ 00:12:28.673 "3da49c9f-383d-44ec-93a5-0c3c201f8f33" 00:12:28.673 ], 00:12:28.674 "product_name": "Malloc disk", 00:12:28.674 "block_size": 512, 00:12:28.674 "num_blocks": 65536, 00:12:28.674 "uuid": "3da49c9f-383d-44ec-93a5-0c3c201f8f33", 00:12:28.674 "assigned_rate_limits": { 
00:12:28.674 "rw_ios_per_sec": 0, 00:12:28.674 "rw_mbytes_per_sec": 0, 00:12:28.674 "r_mbytes_per_sec": 0, 00:12:28.674 "w_mbytes_per_sec": 0 00:12:28.674 }, 00:12:28.674 "claimed": true, 00:12:28.674 "claim_type": "exclusive_write", 00:12:28.674 "zoned": false, 00:12:28.674 "supported_io_types": { 00:12:28.674 "read": true, 00:12:28.674 "write": true, 00:12:28.674 "unmap": true, 00:12:28.674 "flush": true, 00:12:28.674 "reset": true, 00:12:28.674 "nvme_admin": false, 00:12:28.674 "nvme_io": false, 00:12:28.674 "nvme_io_md": false, 00:12:28.674 "write_zeroes": true, 00:12:28.674 "zcopy": true, 00:12:28.674 "get_zone_info": false, 00:12:28.674 "zone_management": false, 00:12:28.674 "zone_append": false, 00:12:28.674 "compare": false, 00:12:28.674 "compare_and_write": false, 00:12:28.674 "abort": true, 00:12:28.674 "seek_hole": false, 00:12:28.674 "seek_data": false, 00:12:28.674 "copy": true, 00:12:28.674 "nvme_iov_md": false 00:12:28.674 }, 00:12:28.674 "memory_domains": [ 00:12:28.674 { 00:12:28.674 "dma_device_id": "system", 00:12:28.674 "dma_device_type": 1 00:12:28.674 }, 00:12:28.674 { 00:12:28.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.674 "dma_device_type": 2 00:12:28.674 } 00:12:28.674 ], 00:12:28.674 "driver_specific": {} 00:12:28.674 } 00:12:28.674 ] 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.674 16:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.674 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.674 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.674 "name": "Existed_Raid", 00:12:28.674 "uuid": "b0996c2f-1cfe-43c7-a881-a30416beb82c", 00:12:28.674 "strip_size_kb": 0, 00:12:28.674 "state": "configuring", 00:12:28.674 "raid_level": "raid1", 00:12:28.674 "superblock": true, 00:12:28.674 "num_base_bdevs": 4, 00:12:28.674 "num_base_bdevs_discovered": 2, 00:12:28.674 "num_base_bdevs_operational": 4, 00:12:28.674 
"base_bdevs_list": [ 00:12:28.674 { 00:12:28.674 "name": "BaseBdev1", 00:12:28.674 "uuid": "6acca11d-1d25-4924-94fb-ed2b04bfc3ab", 00:12:28.674 "is_configured": true, 00:12:28.674 "data_offset": 2048, 00:12:28.674 "data_size": 63488 00:12:28.674 }, 00:12:28.674 { 00:12:28.674 "name": "BaseBdev2", 00:12:28.674 "uuid": "3da49c9f-383d-44ec-93a5-0c3c201f8f33", 00:12:28.674 "is_configured": true, 00:12:28.674 "data_offset": 2048, 00:12:28.674 "data_size": 63488 00:12:28.674 }, 00:12:28.674 { 00:12:28.674 "name": "BaseBdev3", 00:12:28.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.674 "is_configured": false, 00:12:28.674 "data_offset": 0, 00:12:28.674 "data_size": 0 00:12:28.674 }, 00:12:28.674 { 00:12:28.674 "name": "BaseBdev4", 00:12:28.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.674 "is_configured": false, 00:12:28.674 "data_offset": 0, 00:12:28.674 "data_size": 0 00:12:28.674 } 00:12:28.674 ] 00:12:28.674 }' 00:12:28.674 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.674 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.994 [2024-11-08 16:53:58.457360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:28.994 BaseBdev3 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.994 [ 00:12:28.994 { 00:12:28.994 "name": "BaseBdev3", 00:12:28.994 "aliases": [ 00:12:28.994 "f6b982b1-552f-4a23-99d4-811a80a4c7cb" 00:12:28.994 ], 00:12:28.994 "product_name": "Malloc disk", 00:12:28.994 "block_size": 512, 00:12:28.994 "num_blocks": 65536, 00:12:28.994 "uuid": "f6b982b1-552f-4a23-99d4-811a80a4c7cb", 00:12:28.994 "assigned_rate_limits": { 00:12:28.994 "rw_ios_per_sec": 0, 00:12:28.994 "rw_mbytes_per_sec": 0, 00:12:28.994 "r_mbytes_per_sec": 0, 00:12:28.994 "w_mbytes_per_sec": 0 00:12:28.994 }, 00:12:28.994 "claimed": true, 00:12:28.994 "claim_type": "exclusive_write", 00:12:28.994 "zoned": false, 00:12:28.994 "supported_io_types": { 00:12:28.994 "read": true, 00:12:28.994 
"write": true, 00:12:28.994 "unmap": true, 00:12:28.994 "flush": true, 00:12:28.994 "reset": true, 00:12:28.994 "nvme_admin": false, 00:12:28.994 "nvme_io": false, 00:12:28.994 "nvme_io_md": false, 00:12:28.994 "write_zeroes": true, 00:12:28.994 "zcopy": true, 00:12:28.994 "get_zone_info": false, 00:12:28.994 "zone_management": false, 00:12:28.994 "zone_append": false, 00:12:28.994 "compare": false, 00:12:28.994 "compare_and_write": false, 00:12:28.994 "abort": true, 00:12:28.994 "seek_hole": false, 00:12:28.994 "seek_data": false, 00:12:28.994 "copy": true, 00:12:28.994 "nvme_iov_md": false 00:12:28.994 }, 00:12:28.994 "memory_domains": [ 00:12:28.994 { 00:12:28.994 "dma_device_id": "system", 00:12:28.994 "dma_device_type": 1 00:12:28.994 }, 00:12:28.994 { 00:12:28.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.994 "dma_device_type": 2 00:12:28.994 } 00:12:28.994 ], 00:12:28.994 "driver_specific": {} 00:12:28.994 } 00:12:28.994 ] 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.994 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.252 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.252 "name": "Existed_Raid", 00:12:29.252 "uuid": "b0996c2f-1cfe-43c7-a881-a30416beb82c", 00:12:29.252 "strip_size_kb": 0, 00:12:29.252 "state": "configuring", 00:12:29.252 "raid_level": "raid1", 00:12:29.252 "superblock": true, 00:12:29.252 "num_base_bdevs": 4, 00:12:29.252 "num_base_bdevs_discovered": 3, 00:12:29.252 "num_base_bdevs_operational": 4, 00:12:29.252 "base_bdevs_list": [ 00:12:29.252 { 00:12:29.252 "name": "BaseBdev1", 00:12:29.252 "uuid": "6acca11d-1d25-4924-94fb-ed2b04bfc3ab", 00:12:29.252 "is_configured": true, 00:12:29.252 "data_offset": 2048, 00:12:29.252 "data_size": 63488 00:12:29.252 }, 00:12:29.252 { 00:12:29.252 "name": "BaseBdev2", 00:12:29.252 "uuid": 
"3da49c9f-383d-44ec-93a5-0c3c201f8f33", 00:12:29.252 "is_configured": true, 00:12:29.252 "data_offset": 2048, 00:12:29.252 "data_size": 63488 00:12:29.252 }, 00:12:29.252 { 00:12:29.252 "name": "BaseBdev3", 00:12:29.252 "uuid": "f6b982b1-552f-4a23-99d4-811a80a4c7cb", 00:12:29.252 "is_configured": true, 00:12:29.252 "data_offset": 2048, 00:12:29.252 "data_size": 63488 00:12:29.252 }, 00:12:29.252 { 00:12:29.252 "name": "BaseBdev4", 00:12:29.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.252 "is_configured": false, 00:12:29.252 "data_offset": 0, 00:12:29.252 "data_size": 0 00:12:29.252 } 00:12:29.252 ] 00:12:29.252 }' 00:12:29.252 16:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.252 16:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.511 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:29.511 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.511 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.511 [2024-11-08 16:53:59.020097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:29.511 [2024-11-08 16:53:59.020361] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:29.511 [2024-11-08 16:53:59.020393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.511 BaseBdev4 00:12:29.511 [2024-11-08 16:53:59.020767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:29.511 [2024-11-08 16:53:59.020954] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:29.511 [2024-11-08 16:53:59.020971] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:12:29.511 [2024-11-08 16:53:59.021117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.511 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.511 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:29.511 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:29.511 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:29.512 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:29.512 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:29.512 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:29.512 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:29.512 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.512 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.512 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.512 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:29.512 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.512 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 [ 00:12:29.771 { 00:12:29.771 "name": "BaseBdev4", 00:12:29.771 "aliases": [ 00:12:29.771 "1700924d-2d11-438e-89f8-51cff5d2c348" 00:12:29.771 ], 00:12:29.771 "product_name": "Malloc disk", 00:12:29.771 "block_size": 512, 00:12:29.771 
"num_blocks": 65536, 00:12:29.771 "uuid": "1700924d-2d11-438e-89f8-51cff5d2c348", 00:12:29.771 "assigned_rate_limits": { 00:12:29.771 "rw_ios_per_sec": 0, 00:12:29.771 "rw_mbytes_per_sec": 0, 00:12:29.771 "r_mbytes_per_sec": 0, 00:12:29.771 "w_mbytes_per_sec": 0 00:12:29.771 }, 00:12:29.771 "claimed": true, 00:12:29.771 "claim_type": "exclusive_write", 00:12:29.771 "zoned": false, 00:12:29.771 "supported_io_types": { 00:12:29.771 "read": true, 00:12:29.771 "write": true, 00:12:29.771 "unmap": true, 00:12:29.771 "flush": true, 00:12:29.771 "reset": true, 00:12:29.771 "nvme_admin": false, 00:12:29.771 "nvme_io": false, 00:12:29.771 "nvme_io_md": false, 00:12:29.771 "write_zeroes": true, 00:12:29.771 "zcopy": true, 00:12:29.771 "get_zone_info": false, 00:12:29.771 "zone_management": false, 00:12:29.771 "zone_append": false, 00:12:29.771 "compare": false, 00:12:29.771 "compare_and_write": false, 00:12:29.771 "abort": true, 00:12:29.771 "seek_hole": false, 00:12:29.771 "seek_data": false, 00:12:29.771 "copy": true, 00:12:29.771 "nvme_iov_md": false 00:12:29.771 }, 00:12:29.771 "memory_domains": [ 00:12:29.771 { 00:12:29.771 "dma_device_id": "system", 00:12:29.771 "dma_device_type": 1 00:12:29.771 }, 00:12:29.771 { 00:12:29.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.771 "dma_device_type": 2 00:12:29.771 } 00:12:29.771 ], 00:12:29.771 "driver_specific": {} 00:12:29.771 } 00:12:29.771 ] 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.771 "name": "Existed_Raid", 00:12:29.771 "uuid": "b0996c2f-1cfe-43c7-a881-a30416beb82c", 00:12:29.771 "strip_size_kb": 0, 00:12:29.771 "state": "online", 00:12:29.771 "raid_level": "raid1", 00:12:29.771 "superblock": true, 00:12:29.771 "num_base_bdevs": 4, 
00:12:29.771 "num_base_bdevs_discovered": 4, 00:12:29.771 "num_base_bdevs_operational": 4, 00:12:29.771 "base_bdevs_list": [ 00:12:29.771 { 00:12:29.771 "name": "BaseBdev1", 00:12:29.771 "uuid": "6acca11d-1d25-4924-94fb-ed2b04bfc3ab", 00:12:29.771 "is_configured": true, 00:12:29.771 "data_offset": 2048, 00:12:29.771 "data_size": 63488 00:12:29.771 }, 00:12:29.771 { 00:12:29.771 "name": "BaseBdev2", 00:12:29.771 "uuid": "3da49c9f-383d-44ec-93a5-0c3c201f8f33", 00:12:29.771 "is_configured": true, 00:12:29.771 "data_offset": 2048, 00:12:29.771 "data_size": 63488 00:12:29.771 }, 00:12:29.771 { 00:12:29.771 "name": "BaseBdev3", 00:12:29.771 "uuid": "f6b982b1-552f-4a23-99d4-811a80a4c7cb", 00:12:29.771 "is_configured": true, 00:12:29.771 "data_offset": 2048, 00:12:29.771 "data_size": 63488 00:12:29.771 }, 00:12:29.771 { 00:12:29.771 "name": "BaseBdev4", 00:12:29.771 "uuid": "1700924d-2d11-438e-89f8-51cff5d2c348", 00:12:29.771 "is_configured": true, 00:12:29.771 "data_offset": 2048, 00:12:29.771 "data_size": 63488 00:12:29.771 } 00:12:29.771 ] 00:12:29.771 }' 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.771 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.338 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:30.338 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:30.338 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:30.338 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:30.338 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:30.338 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:30.338 
16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:30.338 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:30.338 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.338 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.338 [2024-11-08 16:53:59.575731] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.338 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.338 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:30.338 "name": "Existed_Raid", 00:12:30.338 "aliases": [ 00:12:30.338 "b0996c2f-1cfe-43c7-a881-a30416beb82c" 00:12:30.338 ], 00:12:30.338 "product_name": "Raid Volume", 00:12:30.338 "block_size": 512, 00:12:30.338 "num_blocks": 63488, 00:12:30.338 "uuid": "b0996c2f-1cfe-43c7-a881-a30416beb82c", 00:12:30.338 "assigned_rate_limits": { 00:12:30.338 "rw_ios_per_sec": 0, 00:12:30.338 "rw_mbytes_per_sec": 0, 00:12:30.338 "r_mbytes_per_sec": 0, 00:12:30.338 "w_mbytes_per_sec": 0 00:12:30.338 }, 00:12:30.338 "claimed": false, 00:12:30.338 "zoned": false, 00:12:30.338 "supported_io_types": { 00:12:30.338 "read": true, 00:12:30.338 "write": true, 00:12:30.338 "unmap": false, 00:12:30.338 "flush": false, 00:12:30.338 "reset": true, 00:12:30.338 "nvme_admin": false, 00:12:30.338 "nvme_io": false, 00:12:30.338 "nvme_io_md": false, 00:12:30.338 "write_zeroes": true, 00:12:30.338 "zcopy": false, 00:12:30.338 "get_zone_info": false, 00:12:30.338 "zone_management": false, 00:12:30.338 "zone_append": false, 00:12:30.338 "compare": false, 00:12:30.338 "compare_and_write": false, 00:12:30.338 "abort": false, 00:12:30.338 "seek_hole": false, 00:12:30.338 "seek_data": false, 00:12:30.338 "copy": false, 00:12:30.338 
"nvme_iov_md": false 00:12:30.338 }, 00:12:30.338 "memory_domains": [ 00:12:30.338 { 00:12:30.338 "dma_device_id": "system", 00:12:30.338 "dma_device_type": 1 00:12:30.338 }, 00:12:30.338 { 00:12:30.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.338 "dma_device_type": 2 00:12:30.338 }, 00:12:30.338 { 00:12:30.338 "dma_device_id": "system", 00:12:30.338 "dma_device_type": 1 00:12:30.338 }, 00:12:30.338 { 00:12:30.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.338 "dma_device_type": 2 00:12:30.338 }, 00:12:30.338 { 00:12:30.338 "dma_device_id": "system", 00:12:30.338 "dma_device_type": 1 00:12:30.338 }, 00:12:30.338 { 00:12:30.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.339 "dma_device_type": 2 00:12:30.339 }, 00:12:30.339 { 00:12:30.339 "dma_device_id": "system", 00:12:30.339 "dma_device_type": 1 00:12:30.339 }, 00:12:30.339 { 00:12:30.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.339 "dma_device_type": 2 00:12:30.339 } 00:12:30.339 ], 00:12:30.339 "driver_specific": { 00:12:30.339 "raid": { 00:12:30.339 "uuid": "b0996c2f-1cfe-43c7-a881-a30416beb82c", 00:12:30.339 "strip_size_kb": 0, 00:12:30.339 "state": "online", 00:12:30.339 "raid_level": "raid1", 00:12:30.339 "superblock": true, 00:12:30.339 "num_base_bdevs": 4, 00:12:30.339 "num_base_bdevs_discovered": 4, 00:12:30.339 "num_base_bdevs_operational": 4, 00:12:30.339 "base_bdevs_list": [ 00:12:30.339 { 00:12:30.339 "name": "BaseBdev1", 00:12:30.339 "uuid": "6acca11d-1d25-4924-94fb-ed2b04bfc3ab", 00:12:30.339 "is_configured": true, 00:12:30.339 "data_offset": 2048, 00:12:30.339 "data_size": 63488 00:12:30.339 }, 00:12:30.339 { 00:12:30.339 "name": "BaseBdev2", 00:12:30.339 "uuid": "3da49c9f-383d-44ec-93a5-0c3c201f8f33", 00:12:30.339 "is_configured": true, 00:12:30.339 "data_offset": 2048, 00:12:30.339 "data_size": 63488 00:12:30.339 }, 00:12:30.339 { 00:12:30.339 "name": "BaseBdev3", 00:12:30.339 "uuid": "f6b982b1-552f-4a23-99d4-811a80a4c7cb", 00:12:30.339 "is_configured": true, 
00:12:30.339 "data_offset": 2048, 00:12:30.339 "data_size": 63488 00:12:30.339 }, 00:12:30.339 { 00:12:30.339 "name": "BaseBdev4", 00:12:30.339 "uuid": "1700924d-2d11-438e-89f8-51cff5d2c348", 00:12:30.339 "is_configured": true, 00:12:30.339 "data_offset": 2048, 00:12:30.339 "data_size": 63488 00:12:30.339 } 00:12:30.339 ] 00:12:30.339 } 00:12:30.339 } 00:12:30.339 }' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:30.339 BaseBdev2 00:12:30.339 BaseBdev3 00:12:30.339 BaseBdev4' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.339 16:53:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.339 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.339 [2024-11-08 16:53:59.855404] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:30.596 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.596 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:30.597 16:53:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.597 "name": "Existed_Raid", 00:12:30.597 "uuid": "b0996c2f-1cfe-43c7-a881-a30416beb82c", 00:12:30.597 "strip_size_kb": 0, 00:12:30.597 
"state": "online", 00:12:30.597 "raid_level": "raid1", 00:12:30.597 "superblock": true, 00:12:30.597 "num_base_bdevs": 4, 00:12:30.597 "num_base_bdevs_discovered": 3, 00:12:30.597 "num_base_bdevs_operational": 3, 00:12:30.597 "base_bdevs_list": [ 00:12:30.597 { 00:12:30.597 "name": null, 00:12:30.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.597 "is_configured": false, 00:12:30.597 "data_offset": 0, 00:12:30.597 "data_size": 63488 00:12:30.597 }, 00:12:30.597 { 00:12:30.597 "name": "BaseBdev2", 00:12:30.597 "uuid": "3da49c9f-383d-44ec-93a5-0c3c201f8f33", 00:12:30.597 "is_configured": true, 00:12:30.597 "data_offset": 2048, 00:12:30.597 "data_size": 63488 00:12:30.597 }, 00:12:30.597 { 00:12:30.597 "name": "BaseBdev3", 00:12:30.597 "uuid": "f6b982b1-552f-4a23-99d4-811a80a4c7cb", 00:12:30.597 "is_configured": true, 00:12:30.597 "data_offset": 2048, 00:12:30.597 "data_size": 63488 00:12:30.597 }, 00:12:30.597 { 00:12:30.597 "name": "BaseBdev4", 00:12:30.597 "uuid": "1700924d-2d11-438e-89f8-51cff5d2c348", 00:12:30.597 "is_configured": true, 00:12:30.597 "data_offset": 2048, 00:12:30.597 "data_size": 63488 00:12:30.597 } 00:12:30.597 ] 00:12:30.597 }' 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.597 16:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.855 16:54:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.855 [2024-11-08 16:54:00.351925] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.855 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.114 [2024-11-08 16:54:00.404421] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.114 [2024-11-08 16:54:00.464769] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:31.114 [2024-11-08 16:54:00.464909] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.114 [2024-11-08 16:54:00.478235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.114 [2024-11-08 16:54:00.478309] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.114 [2024-11-08 16:54:00.478331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:31.114 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.115 BaseBdev2 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:31.115 [ 00:12:31.115 { 00:12:31.115 "name": "BaseBdev2", 00:12:31.115 "aliases": [ 00:12:31.115 "7ecb9aca-dd7a-488b-98d8-5579cf44b2e7" 00:12:31.115 ], 00:12:31.115 "product_name": "Malloc disk", 00:12:31.115 "block_size": 512, 00:12:31.115 "num_blocks": 65536, 00:12:31.115 "uuid": "7ecb9aca-dd7a-488b-98d8-5579cf44b2e7", 00:12:31.115 "assigned_rate_limits": { 00:12:31.115 "rw_ios_per_sec": 0, 00:12:31.115 "rw_mbytes_per_sec": 0, 00:12:31.115 "r_mbytes_per_sec": 0, 00:12:31.115 "w_mbytes_per_sec": 0 00:12:31.115 }, 00:12:31.115 "claimed": false, 00:12:31.115 "zoned": false, 00:12:31.115 "supported_io_types": { 00:12:31.115 "read": true, 00:12:31.115 "write": true, 00:12:31.115 "unmap": true, 00:12:31.115 "flush": true, 00:12:31.115 "reset": true, 00:12:31.115 "nvme_admin": false, 00:12:31.115 "nvme_io": false, 00:12:31.115 "nvme_io_md": false, 00:12:31.115 "write_zeroes": true, 00:12:31.115 "zcopy": true, 00:12:31.115 "get_zone_info": false, 00:12:31.115 "zone_management": false, 00:12:31.115 "zone_append": false, 00:12:31.115 "compare": false, 00:12:31.115 "compare_and_write": false, 00:12:31.115 "abort": true, 00:12:31.115 "seek_hole": false, 00:12:31.115 "seek_data": false, 00:12:31.115 "copy": true, 00:12:31.115 "nvme_iov_md": false 00:12:31.115 }, 00:12:31.115 "memory_domains": [ 00:12:31.115 { 00:12:31.115 "dma_device_id": "system", 00:12:31.115 "dma_device_type": 1 00:12:31.115 }, 00:12:31.115 { 00:12:31.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.115 "dma_device_type": 2 00:12:31.115 } 00:12:31.115 ], 00:12:31.115 "driver_specific": {} 00:12:31.115 } 00:12:31.115 ] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:31.115 16:54:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.115 BaseBdev3 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.115 16:54:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.115 [ 00:12:31.115 { 00:12:31.115 "name": "BaseBdev3", 00:12:31.115 "aliases": [ 00:12:31.115 "df050b81-9bf8-4edf-88df-8c3ead89f337" 00:12:31.115 ], 00:12:31.115 "product_name": "Malloc disk", 00:12:31.115 "block_size": 512, 00:12:31.115 "num_blocks": 65536, 00:12:31.115 "uuid": "df050b81-9bf8-4edf-88df-8c3ead89f337", 00:12:31.115 "assigned_rate_limits": { 00:12:31.115 "rw_ios_per_sec": 0, 00:12:31.115 "rw_mbytes_per_sec": 0, 00:12:31.115 "r_mbytes_per_sec": 0, 00:12:31.115 "w_mbytes_per_sec": 0 00:12:31.115 }, 00:12:31.115 "claimed": false, 00:12:31.115 "zoned": false, 00:12:31.115 "supported_io_types": { 00:12:31.115 "read": true, 00:12:31.115 "write": true, 00:12:31.115 "unmap": true, 00:12:31.115 "flush": true, 00:12:31.115 "reset": true, 00:12:31.115 "nvme_admin": false, 00:12:31.115 "nvme_io": false, 00:12:31.115 "nvme_io_md": false, 00:12:31.115 "write_zeroes": true, 00:12:31.115 "zcopy": true, 00:12:31.115 "get_zone_info": false, 00:12:31.115 "zone_management": false, 00:12:31.115 "zone_append": false, 00:12:31.115 "compare": false, 00:12:31.115 "compare_and_write": false, 00:12:31.115 "abort": true, 00:12:31.115 "seek_hole": false, 00:12:31.115 "seek_data": false, 00:12:31.115 "copy": true, 00:12:31.115 "nvme_iov_md": false 00:12:31.115 }, 00:12:31.115 "memory_domains": [ 00:12:31.115 { 00:12:31.115 "dma_device_id": "system", 00:12:31.115 "dma_device_type": 1 00:12:31.115 }, 00:12:31.115 { 00:12:31.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.115 "dma_device_type": 2 00:12:31.115 } 00:12:31.115 ], 00:12:31.115 "driver_specific": {} 00:12:31.115 } 00:12:31.115 ] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.115 BaseBdev4 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.115 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.116 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.116 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:31.116 16:54:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.116 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.116 [ 00:12:31.116 { 00:12:31.116 "name": "BaseBdev4", 00:12:31.116 "aliases": [ 00:12:31.116 "6ddef10d-c343-442e-858d-291badd96c14" 00:12:31.116 ], 00:12:31.116 "product_name": "Malloc disk", 00:12:31.116 "block_size": 512, 00:12:31.116 "num_blocks": 65536, 00:12:31.116 "uuid": "6ddef10d-c343-442e-858d-291badd96c14", 00:12:31.116 "assigned_rate_limits": { 00:12:31.116 "rw_ios_per_sec": 0, 00:12:31.116 "rw_mbytes_per_sec": 0, 00:12:31.116 "r_mbytes_per_sec": 0, 00:12:31.116 "w_mbytes_per_sec": 0 00:12:31.116 }, 00:12:31.116 "claimed": false, 00:12:31.116 "zoned": false, 00:12:31.116 "supported_io_types": { 00:12:31.116 "read": true, 00:12:31.116 "write": true, 00:12:31.116 "unmap": true, 00:12:31.116 "flush": true, 00:12:31.116 "reset": true, 00:12:31.116 "nvme_admin": false, 00:12:31.116 "nvme_io": false, 00:12:31.116 "nvme_io_md": false, 00:12:31.116 "write_zeroes": true, 00:12:31.116 "zcopy": true, 00:12:31.116 "get_zone_info": false, 00:12:31.116 "zone_management": false, 00:12:31.116 "zone_append": false, 00:12:31.116 "compare": false, 00:12:31.116 "compare_and_write": false, 00:12:31.116 "abort": true, 00:12:31.116 "seek_hole": false, 00:12:31.116 "seek_data": false, 00:12:31.116 "copy": true, 00:12:31.116 "nvme_iov_md": false 00:12:31.116 }, 00:12:31.116 "memory_domains": [ 00:12:31.116 { 00:12:31.116 "dma_device_id": "system", 00:12:31.116 "dma_device_type": 1 00:12:31.116 }, 00:12:31.116 { 00:12:31.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.116 "dma_device_type": 2 00:12:31.116 } 00:12:31.116 ], 00:12:31.116 "driver_specific": {} 00:12:31.116 } 00:12:31.116 ] 00:12:31.116 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.116 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
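Each pass of the `for name in $base_bdev_names` loop above flattens a bdev's geometry (`block_size`, `md_size`, `md_interleave`, `dif_type`) into one string and compares it against the raid volume's. The malloc base bdevs carry no metadata, so the missing keys come back as `null`, and `jq`'s `join()` renders each `null` as an empty string — which is why the trace's comparison strings end in trailing spaces (`'512   '`). A sketch under those assumptions, with a hypothetical trimmed stub in place of the real `bdev_get_bdevs` response:

```shell
#!/bin/sh
# Stub for `rpc_cmd bdev_get_bdevs -b BaseBdev4` output (hypothetical trimmed values).
cat > /tmp/base_bdev.json <<'EOF'
[
  { "name": "BaseBdev4", "block_size": 512, "num_blocks": 65536 }
]
EOF
# Same projection the test runs per base bdev; absent keys index to null,
# and join(" ") (jq >= 1.6) treats null elements as empty strings.
cmp_base_bdev=$(jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' /tmp/base_bdev.json)
printf '[%s]\n' "$cmp_base_bdev"
```

Four elements joined by three separators gives `512` followed by three spaces, matching the `[[ 512 == \5\1\2\ \ \ ]]` pattern test in the trace.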
00:12:31.116 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:31.116 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:31.116 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.116 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.116 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.374 [2024-11-08 16:54:00.640509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.374 [2024-11-08 16:54:00.640585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.374 [2024-11-08 16:54:00.640624] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:31.374 [2024-11-08 16:54:00.643151] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.374 [2024-11-08 16:54:00.643244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.374 "name": "Existed_Raid", 00:12:31.374 "uuid": "cd025fcf-93df-495e-b868-e6f3ad397014", 00:12:31.374 "strip_size_kb": 0, 00:12:31.374 "state": "configuring", 00:12:31.374 "raid_level": "raid1", 00:12:31.374 "superblock": true, 00:12:31.374 "num_base_bdevs": 4, 00:12:31.374 "num_base_bdevs_discovered": 3, 00:12:31.374 "num_base_bdevs_operational": 4, 00:12:31.374 "base_bdevs_list": [ 00:12:31.374 { 00:12:31.374 "name": "BaseBdev1", 00:12:31.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.374 "is_configured": false, 00:12:31.374 "data_offset": 0, 00:12:31.374 "data_size": 0 00:12:31.374 }, 00:12:31.374 { 00:12:31.374 "name": "BaseBdev2", 00:12:31.374 "uuid": "7ecb9aca-dd7a-488b-98d8-5579cf44b2e7", 
00:12:31.374 "is_configured": true, 00:12:31.374 "data_offset": 2048, 00:12:31.374 "data_size": 63488 00:12:31.374 }, 00:12:31.374 { 00:12:31.374 "name": "BaseBdev3", 00:12:31.374 "uuid": "df050b81-9bf8-4edf-88df-8c3ead89f337", 00:12:31.374 "is_configured": true, 00:12:31.374 "data_offset": 2048, 00:12:31.374 "data_size": 63488 00:12:31.374 }, 00:12:31.374 { 00:12:31.374 "name": "BaseBdev4", 00:12:31.374 "uuid": "6ddef10d-c343-442e-858d-291badd96c14", 00:12:31.374 "is_configured": true, 00:12:31.374 "data_offset": 2048, 00:12:31.374 "data_size": 63488 00:12:31.374 } 00:12:31.374 ] 00:12:31.374 }' 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.374 16:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.633 [2024-11-08 16:54:01.023824] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.633 "name": "Existed_Raid", 00:12:31.633 "uuid": "cd025fcf-93df-495e-b868-e6f3ad397014", 00:12:31.633 "strip_size_kb": 0, 00:12:31.633 "state": "configuring", 00:12:31.633 "raid_level": "raid1", 00:12:31.633 "superblock": true, 00:12:31.633 "num_base_bdevs": 4, 00:12:31.633 "num_base_bdevs_discovered": 2, 00:12:31.633 "num_base_bdevs_operational": 4, 00:12:31.633 "base_bdevs_list": [ 00:12:31.633 { 00:12:31.633 "name": "BaseBdev1", 00:12:31.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.633 "is_configured": false, 00:12:31.633 "data_offset": 0, 00:12:31.633 "data_size": 0 00:12:31.633 }, 00:12:31.633 { 00:12:31.633 "name": null, 00:12:31.633 "uuid": "7ecb9aca-dd7a-488b-98d8-5579cf44b2e7", 00:12:31.633 
"is_configured": false, 00:12:31.633 "data_offset": 0, 00:12:31.633 "data_size": 63488 00:12:31.633 }, 00:12:31.633 { 00:12:31.633 "name": "BaseBdev3", 00:12:31.633 "uuid": "df050b81-9bf8-4edf-88df-8c3ead89f337", 00:12:31.633 "is_configured": true, 00:12:31.633 "data_offset": 2048, 00:12:31.633 "data_size": 63488 00:12:31.633 }, 00:12:31.633 { 00:12:31.633 "name": "BaseBdev4", 00:12:31.633 "uuid": "6ddef10d-c343-442e-858d-291badd96c14", 00:12:31.633 "is_configured": true, 00:12:31.633 "data_offset": 2048, 00:12:31.633 "data_size": 63488 00:12:31.633 } 00:12:31.633 ] 00:12:31.633 }' 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.633 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.199 [2024-11-08 16:54:01.497793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.199 BaseBdev1 
00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.199 [ 00:12:32.199 { 00:12:32.199 "name": "BaseBdev1", 00:12:32.199 "aliases": [ 00:12:32.199 "dc134e35-52ca-40a5-a15b-5f95f3d3399d" 00:12:32.199 ], 00:12:32.199 "product_name": "Malloc disk", 00:12:32.199 "block_size": 512, 00:12:32.199 "num_blocks": 65536, 00:12:32.199 "uuid": "dc134e35-52ca-40a5-a15b-5f95f3d3399d", 00:12:32.199 "assigned_rate_limits": { 00:12:32.199 
"rw_ios_per_sec": 0, 00:12:32.199 "rw_mbytes_per_sec": 0, 00:12:32.199 "r_mbytes_per_sec": 0, 00:12:32.199 "w_mbytes_per_sec": 0 00:12:32.199 }, 00:12:32.199 "claimed": true, 00:12:32.199 "claim_type": "exclusive_write", 00:12:32.199 "zoned": false, 00:12:32.199 "supported_io_types": { 00:12:32.199 "read": true, 00:12:32.199 "write": true, 00:12:32.199 "unmap": true, 00:12:32.199 "flush": true, 00:12:32.199 "reset": true, 00:12:32.199 "nvme_admin": false, 00:12:32.199 "nvme_io": false, 00:12:32.199 "nvme_io_md": false, 00:12:32.199 "write_zeroes": true, 00:12:32.199 "zcopy": true, 00:12:32.199 "get_zone_info": false, 00:12:32.199 "zone_management": false, 00:12:32.199 "zone_append": false, 00:12:32.199 "compare": false, 00:12:32.199 "compare_and_write": false, 00:12:32.199 "abort": true, 00:12:32.199 "seek_hole": false, 00:12:32.199 "seek_data": false, 00:12:32.199 "copy": true, 00:12:32.199 "nvme_iov_md": false 00:12:32.199 }, 00:12:32.199 "memory_domains": [ 00:12:32.199 { 00:12:32.199 "dma_device_id": "system", 00:12:32.199 "dma_device_type": 1 00:12:32.199 }, 00:12:32.199 { 00:12:32.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.199 "dma_device_type": 2 00:12:32.199 } 00:12:32.199 ], 00:12:32.199 "driver_specific": {} 00:12:32.199 } 00:12:32.199 ] 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.199 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.200 "name": "Existed_Raid", 00:12:32.200 "uuid": "cd025fcf-93df-495e-b868-e6f3ad397014", 00:12:32.200 "strip_size_kb": 0, 00:12:32.200 "state": "configuring", 00:12:32.200 "raid_level": "raid1", 00:12:32.200 "superblock": true, 00:12:32.200 "num_base_bdevs": 4, 00:12:32.200 "num_base_bdevs_discovered": 3, 00:12:32.200 "num_base_bdevs_operational": 4, 00:12:32.200 "base_bdevs_list": [ 00:12:32.200 { 00:12:32.200 "name": "BaseBdev1", 00:12:32.200 "uuid": "dc134e35-52ca-40a5-a15b-5f95f3d3399d", 00:12:32.200 "is_configured": true, 00:12:32.200 "data_offset": 2048, 00:12:32.200 "data_size": 63488 
00:12:32.200 }, 00:12:32.200 { 00:12:32.200 "name": null, 00:12:32.200 "uuid": "7ecb9aca-dd7a-488b-98d8-5579cf44b2e7", 00:12:32.200 "is_configured": false, 00:12:32.200 "data_offset": 0, 00:12:32.200 "data_size": 63488 00:12:32.200 }, 00:12:32.200 { 00:12:32.200 "name": "BaseBdev3", 00:12:32.200 "uuid": "df050b81-9bf8-4edf-88df-8c3ead89f337", 00:12:32.200 "is_configured": true, 00:12:32.200 "data_offset": 2048, 00:12:32.200 "data_size": 63488 00:12:32.200 }, 00:12:32.200 { 00:12:32.200 "name": "BaseBdev4", 00:12:32.200 "uuid": "6ddef10d-c343-442e-858d-291badd96c14", 00:12:32.200 "is_configured": true, 00:12:32.200 "data_offset": 2048, 00:12:32.200 "data_size": 63488 00:12:32.200 } 00:12:32.200 ] 00:12:32.200 }' 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.200 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.458 
[2024-11-08 16:54:01.965163] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.458 16:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.716 16:54:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.716 "name": "Existed_Raid", 00:12:32.716 "uuid": "cd025fcf-93df-495e-b868-e6f3ad397014", 00:12:32.716 "strip_size_kb": 0, 00:12:32.716 "state": "configuring", 00:12:32.716 "raid_level": "raid1", 00:12:32.716 "superblock": true, 00:12:32.716 "num_base_bdevs": 4, 00:12:32.716 "num_base_bdevs_discovered": 2, 00:12:32.716 "num_base_bdevs_operational": 4, 00:12:32.716 "base_bdevs_list": [ 00:12:32.716 { 00:12:32.716 "name": "BaseBdev1", 00:12:32.716 "uuid": "dc134e35-52ca-40a5-a15b-5f95f3d3399d", 00:12:32.716 "is_configured": true, 00:12:32.716 "data_offset": 2048, 00:12:32.716 "data_size": 63488 00:12:32.716 }, 00:12:32.716 { 00:12:32.716 "name": null, 00:12:32.716 "uuid": "7ecb9aca-dd7a-488b-98d8-5579cf44b2e7", 00:12:32.716 "is_configured": false, 00:12:32.716 "data_offset": 0, 00:12:32.716 "data_size": 63488 00:12:32.716 }, 00:12:32.716 { 00:12:32.716 "name": null, 00:12:32.716 "uuid": "df050b81-9bf8-4edf-88df-8c3ead89f337", 00:12:32.716 "is_configured": false, 00:12:32.716 "data_offset": 0, 00:12:32.716 "data_size": 63488 00:12:32.716 }, 00:12:32.716 { 00:12:32.716 "name": "BaseBdev4", 00:12:32.716 "uuid": "6ddef10d-c343-442e-858d-291badd96c14", 00:12:32.716 "is_configured": true, 00:12:32.716 "data_offset": 2048, 00:12:32.716 "data_size": 63488 00:12:32.716 } 00:12:32.716 ] 00:12:32.716 }' 00:12:32.716 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.716 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.975 
16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.975 [2024-11-08 16:54:02.420921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.975 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.975 "name": "Existed_Raid", 00:12:32.975 "uuid": "cd025fcf-93df-495e-b868-e6f3ad397014", 00:12:32.975 "strip_size_kb": 0, 00:12:32.975 "state": "configuring", 00:12:32.975 "raid_level": "raid1", 00:12:32.975 "superblock": true, 00:12:32.975 "num_base_bdevs": 4, 00:12:32.975 "num_base_bdevs_discovered": 3, 00:12:32.975 "num_base_bdevs_operational": 4, 00:12:32.975 "base_bdevs_list": [ 00:12:32.975 { 00:12:32.975 "name": "BaseBdev1", 00:12:32.975 "uuid": "dc134e35-52ca-40a5-a15b-5f95f3d3399d", 00:12:32.975 "is_configured": true, 00:12:32.975 "data_offset": 2048, 00:12:32.975 "data_size": 63488 00:12:32.975 }, 00:12:32.975 { 00:12:32.975 "name": null, 00:12:32.975 "uuid": "7ecb9aca-dd7a-488b-98d8-5579cf44b2e7", 00:12:32.975 "is_configured": false, 00:12:32.975 "data_offset": 0, 00:12:32.975 "data_size": 63488 00:12:32.975 }, 00:12:32.975 { 00:12:32.975 "name": "BaseBdev3", 00:12:32.975 "uuid": "df050b81-9bf8-4edf-88df-8c3ead89f337", 00:12:32.975 "is_configured": true, 00:12:32.975 "data_offset": 2048, 00:12:32.975 "data_size": 63488 00:12:32.975 }, 00:12:32.975 { 00:12:32.975 "name": "BaseBdev4", 00:12:32.975 "uuid": 
"6ddef10d-c343-442e-858d-291badd96c14", 00:12:32.975 "is_configured": true, 00:12:32.975 "data_offset": 2048, 00:12:32.976 "data_size": 63488 00:12:32.976 } 00:12:32.976 ] 00:12:32.976 }' 00:12:32.976 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.976 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.572 [2024-11-08 16:54:02.912312] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.572 "name": "Existed_Raid", 00:12:33.572 "uuid": "cd025fcf-93df-495e-b868-e6f3ad397014", 00:12:33.572 "strip_size_kb": 0, 00:12:33.572 "state": "configuring", 00:12:33.572 "raid_level": "raid1", 00:12:33.572 "superblock": true, 00:12:33.572 "num_base_bdevs": 4, 00:12:33.572 "num_base_bdevs_discovered": 2, 00:12:33.572 "num_base_bdevs_operational": 4, 00:12:33.572 "base_bdevs_list": [ 00:12:33.572 { 00:12:33.572 "name": null, 00:12:33.572 
"uuid": "dc134e35-52ca-40a5-a15b-5f95f3d3399d", 00:12:33.572 "is_configured": false, 00:12:33.572 "data_offset": 0, 00:12:33.572 "data_size": 63488 00:12:33.572 }, 00:12:33.572 { 00:12:33.572 "name": null, 00:12:33.572 "uuid": "7ecb9aca-dd7a-488b-98d8-5579cf44b2e7", 00:12:33.572 "is_configured": false, 00:12:33.572 "data_offset": 0, 00:12:33.572 "data_size": 63488 00:12:33.572 }, 00:12:33.572 { 00:12:33.572 "name": "BaseBdev3", 00:12:33.572 "uuid": "df050b81-9bf8-4edf-88df-8c3ead89f337", 00:12:33.572 "is_configured": true, 00:12:33.572 "data_offset": 2048, 00:12:33.572 "data_size": 63488 00:12:33.572 }, 00:12:33.572 { 00:12:33.572 "name": "BaseBdev4", 00:12:33.572 "uuid": "6ddef10d-c343-442e-858d-291badd96c14", 00:12:33.572 "is_configured": true, 00:12:33.572 "data_offset": 2048, 00:12:33.572 "data_size": 63488 00:12:33.572 } 00:12:33.572 ] 00:12:33.572 }' 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.572 16:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.139 [2024-11-08 16:54:03.415147] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.139 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.140 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.140 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.140 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.140 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.140 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.140 "name": "Existed_Raid", 00:12:34.140 "uuid": "cd025fcf-93df-495e-b868-e6f3ad397014", 00:12:34.140 "strip_size_kb": 0, 00:12:34.140 "state": "configuring", 00:12:34.140 "raid_level": "raid1", 00:12:34.140 "superblock": true, 00:12:34.140 "num_base_bdevs": 4, 00:12:34.140 "num_base_bdevs_discovered": 3, 00:12:34.140 "num_base_bdevs_operational": 4, 00:12:34.140 "base_bdevs_list": [ 00:12:34.140 { 00:12:34.140 "name": null, 00:12:34.140 "uuid": "dc134e35-52ca-40a5-a15b-5f95f3d3399d", 00:12:34.140 "is_configured": false, 00:12:34.140 "data_offset": 0, 00:12:34.140 "data_size": 63488 00:12:34.140 }, 00:12:34.140 { 00:12:34.140 "name": "BaseBdev2", 00:12:34.140 "uuid": "7ecb9aca-dd7a-488b-98d8-5579cf44b2e7", 00:12:34.140 "is_configured": true, 00:12:34.140 "data_offset": 2048, 00:12:34.140 "data_size": 63488 00:12:34.140 }, 00:12:34.140 { 00:12:34.140 "name": "BaseBdev3", 00:12:34.140 "uuid": "df050b81-9bf8-4edf-88df-8c3ead89f337", 00:12:34.140 "is_configured": true, 00:12:34.140 "data_offset": 2048, 00:12:34.140 "data_size": 63488 00:12:34.140 }, 00:12:34.140 { 00:12:34.140 "name": "BaseBdev4", 00:12:34.140 "uuid": "6ddef10d-c343-442e-858d-291badd96c14", 00:12:34.140 "is_configured": true, 00:12:34.140 "data_offset": 2048, 00:12:34.140 "data_size": 63488 00:12:34.140 } 00:12:34.140 ] 00:12:34.140 }' 00:12:34.140 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.140 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.399 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.399 16:54:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.399 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.399 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:34.399 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.399 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:34.399 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.399 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.399 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.399 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:34.399 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dc134e35-52ca-40a5-a15b-5f95f3d3399d 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.658 [2024-11-08 16:54:03.965934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:34.658 [2024-11-08 16:54:03.966170] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:34.658 [2024-11-08 16:54:03.966198] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.658 NewBaseBdev 00:12:34.658 [2024-11-08 16:54:03.966498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:12:34.658 [2024-11-08 16:54:03.966675] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:34.658 [2024-11-08 16:54:03.966689] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:12:34.658 [2024-11-08 16:54:03.966809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.658 16:54:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:34.658 [ 00:12:34.658 { 00:12:34.658 "name": "NewBaseBdev", 00:12:34.658 "aliases": [ 00:12:34.658 "dc134e35-52ca-40a5-a15b-5f95f3d3399d" 00:12:34.658 ], 00:12:34.658 "product_name": "Malloc disk", 00:12:34.658 "block_size": 512, 00:12:34.658 "num_blocks": 65536, 00:12:34.658 "uuid": "dc134e35-52ca-40a5-a15b-5f95f3d3399d", 00:12:34.658 "assigned_rate_limits": { 00:12:34.658 "rw_ios_per_sec": 0, 00:12:34.658 "rw_mbytes_per_sec": 0, 00:12:34.658 "r_mbytes_per_sec": 0, 00:12:34.658 "w_mbytes_per_sec": 0 00:12:34.658 }, 00:12:34.658 "claimed": true, 00:12:34.658 "claim_type": "exclusive_write", 00:12:34.658 "zoned": false, 00:12:34.658 "supported_io_types": { 00:12:34.658 "read": true, 00:12:34.658 "write": true, 00:12:34.658 "unmap": true, 00:12:34.658 "flush": true, 00:12:34.658 "reset": true, 00:12:34.658 "nvme_admin": false, 00:12:34.658 "nvme_io": false, 00:12:34.658 "nvme_io_md": false, 00:12:34.658 "write_zeroes": true, 00:12:34.658 "zcopy": true, 00:12:34.658 "get_zone_info": false, 00:12:34.658 "zone_management": false, 00:12:34.658 "zone_append": false, 00:12:34.658 "compare": false, 00:12:34.658 "compare_and_write": false, 00:12:34.658 "abort": true, 00:12:34.658 "seek_hole": false, 00:12:34.658 "seek_data": false, 00:12:34.658 "copy": true, 00:12:34.658 "nvme_iov_md": false 00:12:34.658 }, 00:12:34.658 "memory_domains": [ 00:12:34.658 { 00:12:34.658 "dma_device_id": "system", 00:12:34.658 "dma_device_type": 1 00:12:34.658 }, 00:12:34.658 { 00:12:34.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.658 "dma_device_type": 2 00:12:34.658 } 00:12:34.658 ], 00:12:34.658 "driver_specific": {} 00:12:34.658 } 00:12:34.658 ] 00:12:34.658 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.658 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:34.658 16:54:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.659 "name": "Existed_Raid", 00:12:34.659 "uuid": "cd025fcf-93df-495e-b868-e6f3ad397014", 00:12:34.659 "strip_size_kb": 0, 00:12:34.659 "state": "online", 00:12:34.659 "raid_level": 
"raid1", 00:12:34.659 "superblock": true, 00:12:34.659 "num_base_bdevs": 4, 00:12:34.659 "num_base_bdevs_discovered": 4, 00:12:34.659 "num_base_bdevs_operational": 4, 00:12:34.659 "base_bdevs_list": [ 00:12:34.659 { 00:12:34.659 "name": "NewBaseBdev", 00:12:34.659 "uuid": "dc134e35-52ca-40a5-a15b-5f95f3d3399d", 00:12:34.659 "is_configured": true, 00:12:34.659 "data_offset": 2048, 00:12:34.659 "data_size": 63488 00:12:34.659 }, 00:12:34.659 { 00:12:34.659 "name": "BaseBdev2", 00:12:34.659 "uuid": "7ecb9aca-dd7a-488b-98d8-5579cf44b2e7", 00:12:34.659 "is_configured": true, 00:12:34.659 "data_offset": 2048, 00:12:34.659 "data_size": 63488 00:12:34.659 }, 00:12:34.659 { 00:12:34.659 "name": "BaseBdev3", 00:12:34.659 "uuid": "df050b81-9bf8-4edf-88df-8c3ead89f337", 00:12:34.659 "is_configured": true, 00:12:34.659 "data_offset": 2048, 00:12:34.659 "data_size": 63488 00:12:34.659 }, 00:12:34.659 { 00:12:34.659 "name": "BaseBdev4", 00:12:34.659 "uuid": "6ddef10d-c343-442e-858d-291badd96c14", 00:12:34.659 "is_configured": true, 00:12:34.659 "data_offset": 2048, 00:12:34.659 "data_size": 63488 00:12:34.659 } 00:12:34.659 ] 00:12:34.659 }' 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.659 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.918 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:34.918 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:34.918 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.918 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.918 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.918 16:54:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.918 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:34.918 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.918 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.918 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.918 [2024-11-08 16:54:04.437611] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.176 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.176 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:35.176 "name": "Existed_Raid", 00:12:35.176 "aliases": [ 00:12:35.176 "cd025fcf-93df-495e-b868-e6f3ad397014" 00:12:35.176 ], 00:12:35.176 "product_name": "Raid Volume", 00:12:35.176 "block_size": 512, 00:12:35.176 "num_blocks": 63488, 00:12:35.176 "uuid": "cd025fcf-93df-495e-b868-e6f3ad397014", 00:12:35.176 "assigned_rate_limits": { 00:12:35.176 "rw_ios_per_sec": 0, 00:12:35.176 "rw_mbytes_per_sec": 0, 00:12:35.176 "r_mbytes_per_sec": 0, 00:12:35.176 "w_mbytes_per_sec": 0 00:12:35.176 }, 00:12:35.176 "claimed": false, 00:12:35.176 "zoned": false, 00:12:35.176 "supported_io_types": { 00:12:35.176 "read": true, 00:12:35.176 "write": true, 00:12:35.176 "unmap": false, 00:12:35.176 "flush": false, 00:12:35.176 "reset": true, 00:12:35.176 "nvme_admin": false, 00:12:35.176 "nvme_io": false, 00:12:35.176 "nvme_io_md": false, 00:12:35.176 "write_zeroes": true, 00:12:35.176 "zcopy": false, 00:12:35.176 "get_zone_info": false, 00:12:35.176 "zone_management": false, 00:12:35.176 "zone_append": false, 00:12:35.176 "compare": false, 00:12:35.176 "compare_and_write": false, 00:12:35.176 "abort": false, 00:12:35.176 "seek_hole": false, 
00:12:35.176 "seek_data": false, 00:12:35.176 "copy": false, 00:12:35.176 "nvme_iov_md": false 00:12:35.176 }, 00:12:35.176 "memory_domains": [ 00:12:35.176 { 00:12:35.176 "dma_device_id": "system", 00:12:35.176 "dma_device_type": 1 00:12:35.176 }, 00:12:35.176 { 00:12:35.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.177 "dma_device_type": 2 00:12:35.177 }, 00:12:35.177 { 00:12:35.177 "dma_device_id": "system", 00:12:35.177 "dma_device_type": 1 00:12:35.177 }, 00:12:35.177 { 00:12:35.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.177 "dma_device_type": 2 00:12:35.177 }, 00:12:35.177 { 00:12:35.177 "dma_device_id": "system", 00:12:35.177 "dma_device_type": 1 00:12:35.177 }, 00:12:35.177 { 00:12:35.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.177 "dma_device_type": 2 00:12:35.177 }, 00:12:35.177 { 00:12:35.177 "dma_device_id": "system", 00:12:35.177 "dma_device_type": 1 00:12:35.177 }, 00:12:35.177 { 00:12:35.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.177 "dma_device_type": 2 00:12:35.177 } 00:12:35.177 ], 00:12:35.177 "driver_specific": { 00:12:35.177 "raid": { 00:12:35.177 "uuid": "cd025fcf-93df-495e-b868-e6f3ad397014", 00:12:35.177 "strip_size_kb": 0, 00:12:35.177 "state": "online", 00:12:35.177 "raid_level": "raid1", 00:12:35.177 "superblock": true, 00:12:35.177 "num_base_bdevs": 4, 00:12:35.177 "num_base_bdevs_discovered": 4, 00:12:35.177 "num_base_bdevs_operational": 4, 00:12:35.177 "base_bdevs_list": [ 00:12:35.177 { 00:12:35.177 "name": "NewBaseBdev", 00:12:35.177 "uuid": "dc134e35-52ca-40a5-a15b-5f95f3d3399d", 00:12:35.177 "is_configured": true, 00:12:35.177 "data_offset": 2048, 00:12:35.177 "data_size": 63488 00:12:35.177 }, 00:12:35.177 { 00:12:35.177 "name": "BaseBdev2", 00:12:35.177 "uuid": "7ecb9aca-dd7a-488b-98d8-5579cf44b2e7", 00:12:35.177 "is_configured": true, 00:12:35.177 "data_offset": 2048, 00:12:35.177 "data_size": 63488 00:12:35.177 }, 00:12:35.177 { 00:12:35.177 "name": "BaseBdev3", 00:12:35.177 "uuid": 
"df050b81-9bf8-4edf-88df-8c3ead89f337", 00:12:35.177 "is_configured": true, 00:12:35.177 "data_offset": 2048, 00:12:35.177 "data_size": 63488 00:12:35.177 }, 00:12:35.177 { 00:12:35.177 "name": "BaseBdev4", 00:12:35.177 "uuid": "6ddef10d-c343-442e-858d-291badd96c14", 00:12:35.177 "is_configured": true, 00:12:35.177 "data_offset": 2048, 00:12:35.177 "data_size": 63488 00:12:35.177 } 00:12:35.177 ] 00:12:35.177 } 00:12:35.177 } 00:12:35.177 }' 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:35.177 BaseBdev2 00:12:35.177 BaseBdev3 00:12:35.177 BaseBdev4' 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.177 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.436 
16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.436 [2024-11-08 16:54:04.776761] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:35.436 [2024-11-08 16:54:04.776801] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.436 [2024-11-08 16:54:04.776904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.436 [2024-11-08 16:54:04.777219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.436 [2024-11-08 16:54:04.777248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:12:35.436 16:54:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84644 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84644 ']' 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84644 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84644 00:12:35.436 killing process with pid 84644 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84644' 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84644 00:12:35.436 [2024-11-08 16:54:04.823196] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.436 16:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84644 00:12:35.436 [2024-11-08 16:54:04.868697] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:35.693 16:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:35.693 00:12:35.693 real 0m9.757s 00:12:35.693 user 0m16.721s 00:12:35.693 sys 0m1.792s 00:12:35.693 16:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:35.693 16:54:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.693 ************************************ 00:12:35.693 END TEST raid_state_function_test_sb 00:12:35.693 ************************************ 00:12:35.693 16:54:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:35.693 16:54:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:35.693 16:54:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.693 16:54:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:35.693 ************************************ 00:12:35.693 START TEST raid_superblock_test 00:12:35.693 ************************************ 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85298 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85298 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85298 ']' 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:35.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:35.693 16:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:35.694 16:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.951 [2024-11-08 16:54:05.291367] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:35.951 [2024-11-08 16:54:05.291523] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85298 ] 00:12:35.951 [2024-11-08 16:54:05.440936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.209 [2024-11-08 16:54:05.495996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.209 [2024-11-08 16:54:05.542743] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.209 [2024-11-08 16:54:05.542800] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:36.776 
16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.776 malloc1 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.776 [2024-11-08 16:54:06.216547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:36.776 [2024-11-08 16:54:06.216666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.776 [2024-11-08 16:54:06.216697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:36.776 [2024-11-08 16:54:06.216716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.776 [2024-11-08 16:54:06.219329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.776 [2024-11-08 16:54:06.219381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:36.776 pt1 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.776 malloc2 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.776 [2024-11-08 16:54:06.255747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:36.776 [2024-11-08 16:54:06.255851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.776 [2024-11-08 16:54:06.255879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:36.776 [2024-11-08 16:54:06.255895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.776 [2024-11-08 16:54:06.259037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.776 [2024-11-08 16:54:06.259093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:36.776 
pt2 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.776 malloc3 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.776 [2024-11-08 16:54:06.285739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:36.776 [2024-11-08 16:54:06.285828] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.776 [2024-11-08 16:54:06.285854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:36.776 [2024-11-08 16:54:06.285868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.776 [2024-11-08 16:54:06.288542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.776 [2024-11-08 16:54:06.288604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:36.776 pt3 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.776 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.035 malloc4 00:12:37.035 16:54:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.035 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:37.035 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.035 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.036 [2024-11-08 16:54:06.315650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:37.036 [2024-11-08 16:54:06.315749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.036 [2024-11-08 16:54:06.315773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:37.036 [2024-11-08 16:54:06.315790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.036 [2024-11-08 16:54:06.318359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.036 [2024-11-08 16:54:06.318414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:37.036 pt4 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.036 [2024-11-08 16:54:06.327755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:37.036 [2024-11-08 16:54:06.330007] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:37.036 [2024-11-08 16:54:06.330103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:37.036 [2024-11-08 16:54:06.330157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:37.036 [2024-11-08 16:54:06.330359] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:37.036 [2024-11-08 16:54:06.330393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:37.036 [2024-11-08 16:54:06.330794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:37.036 [2024-11-08 16:54:06.330999] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:37.036 [2024-11-08 16:54:06.331020] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:37.036 [2024-11-08 16:54:06.331221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.036 
16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.036 "name": "raid_bdev1", 00:12:37.036 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:37.036 "strip_size_kb": 0, 00:12:37.036 "state": "online", 00:12:37.036 "raid_level": "raid1", 00:12:37.036 "superblock": true, 00:12:37.036 "num_base_bdevs": 4, 00:12:37.036 "num_base_bdevs_discovered": 4, 00:12:37.036 "num_base_bdevs_operational": 4, 00:12:37.036 "base_bdevs_list": [ 00:12:37.036 { 00:12:37.036 "name": "pt1", 00:12:37.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:37.036 "is_configured": true, 00:12:37.036 "data_offset": 2048, 00:12:37.036 "data_size": 63488 00:12:37.036 }, 00:12:37.036 { 00:12:37.036 "name": "pt2", 00:12:37.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.036 "is_configured": true, 00:12:37.036 "data_offset": 2048, 00:12:37.036 "data_size": 63488 00:12:37.036 }, 00:12:37.036 { 00:12:37.036 "name": "pt3", 00:12:37.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.036 "is_configured": true, 00:12:37.036 "data_offset": 2048, 00:12:37.036 "data_size": 63488 
00:12:37.036 }, 00:12:37.036 { 00:12:37.036 "name": "pt4", 00:12:37.036 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:37.036 "is_configured": true, 00:12:37.036 "data_offset": 2048, 00:12:37.036 "data_size": 63488 00:12:37.036 } 00:12:37.036 ] 00:12:37.036 }' 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.036 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.295 [2024-11-08 16:54:06.767545] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.295 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:37.295 "name": "raid_bdev1", 00:12:37.295 "aliases": [ 00:12:37.295 "3fc1602d-c31f-47cf-a44e-d39f7bb66603" 00:12:37.295 ], 
00:12:37.295 "product_name": "Raid Volume", 00:12:37.295 "block_size": 512, 00:12:37.295 "num_blocks": 63488, 00:12:37.295 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:37.295 "assigned_rate_limits": { 00:12:37.295 "rw_ios_per_sec": 0, 00:12:37.295 "rw_mbytes_per_sec": 0, 00:12:37.295 "r_mbytes_per_sec": 0, 00:12:37.295 "w_mbytes_per_sec": 0 00:12:37.295 }, 00:12:37.295 "claimed": false, 00:12:37.295 "zoned": false, 00:12:37.295 "supported_io_types": { 00:12:37.295 "read": true, 00:12:37.295 "write": true, 00:12:37.295 "unmap": false, 00:12:37.295 "flush": false, 00:12:37.295 "reset": true, 00:12:37.295 "nvme_admin": false, 00:12:37.295 "nvme_io": false, 00:12:37.295 "nvme_io_md": false, 00:12:37.295 "write_zeroes": true, 00:12:37.295 "zcopy": false, 00:12:37.295 "get_zone_info": false, 00:12:37.295 "zone_management": false, 00:12:37.295 "zone_append": false, 00:12:37.295 "compare": false, 00:12:37.295 "compare_and_write": false, 00:12:37.295 "abort": false, 00:12:37.295 "seek_hole": false, 00:12:37.295 "seek_data": false, 00:12:37.295 "copy": false, 00:12:37.295 "nvme_iov_md": false 00:12:37.295 }, 00:12:37.295 "memory_domains": [ 00:12:37.295 { 00:12:37.295 "dma_device_id": "system", 00:12:37.295 "dma_device_type": 1 00:12:37.295 }, 00:12:37.295 { 00:12:37.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.295 "dma_device_type": 2 00:12:37.295 }, 00:12:37.295 { 00:12:37.295 "dma_device_id": "system", 00:12:37.295 "dma_device_type": 1 00:12:37.295 }, 00:12:37.295 { 00:12:37.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.295 "dma_device_type": 2 00:12:37.296 }, 00:12:37.296 { 00:12:37.296 "dma_device_id": "system", 00:12:37.296 "dma_device_type": 1 00:12:37.296 }, 00:12:37.296 { 00:12:37.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.296 "dma_device_type": 2 00:12:37.296 }, 00:12:37.296 { 00:12:37.296 "dma_device_id": "system", 00:12:37.296 "dma_device_type": 1 00:12:37.296 }, 00:12:37.296 { 00:12:37.296 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:37.296 "dma_device_type": 2 00:12:37.296 } 00:12:37.296 ], 00:12:37.296 "driver_specific": { 00:12:37.296 "raid": { 00:12:37.296 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:37.296 "strip_size_kb": 0, 00:12:37.296 "state": "online", 00:12:37.296 "raid_level": "raid1", 00:12:37.296 "superblock": true, 00:12:37.296 "num_base_bdevs": 4, 00:12:37.296 "num_base_bdevs_discovered": 4, 00:12:37.296 "num_base_bdevs_operational": 4, 00:12:37.296 "base_bdevs_list": [ 00:12:37.296 { 00:12:37.296 "name": "pt1", 00:12:37.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:37.296 "is_configured": true, 00:12:37.296 "data_offset": 2048, 00:12:37.296 "data_size": 63488 00:12:37.296 }, 00:12:37.296 { 00:12:37.296 "name": "pt2", 00:12:37.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.296 "is_configured": true, 00:12:37.296 "data_offset": 2048, 00:12:37.296 "data_size": 63488 00:12:37.296 }, 00:12:37.296 { 00:12:37.296 "name": "pt3", 00:12:37.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.296 "is_configured": true, 00:12:37.296 "data_offset": 2048, 00:12:37.296 "data_size": 63488 00:12:37.296 }, 00:12:37.296 { 00:12:37.296 "name": "pt4", 00:12:37.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:37.296 "is_configured": true, 00:12:37.296 "data_offset": 2048, 00:12:37.296 "data_size": 63488 00:12:37.296 } 00:12:37.296 ] 00:12:37.296 } 00:12:37.296 } 00:12:37.296 }' 00:12:37.296 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:37.555 pt2 00:12:37.555 pt3 00:12:37.555 pt4' 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.555 16:54:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.555 16:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:37.555 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:37.555 [2024-11-08 16:54:07.071033] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3fc1602d-c31f-47cf-a44e-d39f7bb66603 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3fc1602d-c31f-47cf-a44e-d39f7bb66603 ']' 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.815 [2024-11-08 16:54:07.114544] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.815 [2024-11-08 16:54:07.114587] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.815 [2024-11-08 16:54:07.114706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.815 [2024-11-08 16:54:07.114814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.815 [2024-11-08 16:54:07.114845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.815 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.816 [2024-11-08 16:54:07.266370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:37.816 [2024-11-08 16:54:07.268668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:37.816 [2024-11-08 16:54:07.268738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:37.816 [2024-11-08 16:54:07.268776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:37.816 [2024-11-08 16:54:07.268839] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:37.816 [2024-11-08 16:54:07.268903] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:37.816 [2024-11-08 16:54:07.268932] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:37.816 [2024-11-08 16:54:07.268951] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:37.816 [2024-11-08 16:54:07.268969] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.816 [2024-11-08 16:54:07.268987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 
00:12:37.816 request: 00:12:37.816 { 00:12:37.816 "name": "raid_bdev1", 00:12:37.816 "raid_level": "raid1", 00:12:37.816 "base_bdevs": [ 00:12:37.816 "malloc1", 00:12:37.816 "malloc2", 00:12:37.816 "malloc3", 00:12:37.816 "malloc4" 00:12:37.816 ], 00:12:37.816 "superblock": false, 00:12:37.816 "method": "bdev_raid_create", 00:12:37.816 "req_id": 1 00:12:37.816 } 00:12:37.816 Got JSON-RPC error response 00:12:37.816 response: 00:12:37.816 { 00:12:37.816 "code": -17, 00:12:37.816 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:37.816 } 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:37.816 16:54:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.816 [2024-11-08 16:54:07.318212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:37.816 [2024-11-08 16:54:07.318305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.816 [2024-11-08 16:54:07.318333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:37.816 [2024-11-08 16:54:07.318343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.816 [2024-11-08 16:54:07.320983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.816 [2024-11-08 16:54:07.321031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:37.816 [2024-11-08 16:54:07.321133] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:37.816 [2024-11-08 16:54:07.321176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:37.816 pt1 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.816 16:54:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.816 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.075 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.075 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.075 "name": "raid_bdev1", 00:12:38.075 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:38.075 "strip_size_kb": 0, 00:12:38.075 "state": "configuring", 00:12:38.075 "raid_level": "raid1", 00:12:38.075 "superblock": true, 00:12:38.075 "num_base_bdevs": 4, 00:12:38.075 "num_base_bdevs_discovered": 1, 00:12:38.075 "num_base_bdevs_operational": 4, 00:12:38.075 "base_bdevs_list": [ 00:12:38.075 { 00:12:38.075 "name": "pt1", 00:12:38.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:38.075 "is_configured": true, 00:12:38.075 "data_offset": 2048, 00:12:38.075 "data_size": 63488 00:12:38.075 }, 00:12:38.075 { 00:12:38.075 "name": null, 00:12:38.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.075 "is_configured": false, 00:12:38.075 "data_offset": 2048, 00:12:38.075 "data_size": 63488 00:12:38.075 }, 00:12:38.075 { 00:12:38.075 "name": null, 00:12:38.075 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:38.075 
"is_configured": false, 00:12:38.075 "data_offset": 2048, 00:12:38.075 "data_size": 63488 00:12:38.075 }, 00:12:38.075 { 00:12:38.075 "name": null, 00:12:38.075 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:38.075 "is_configured": false, 00:12:38.075 "data_offset": 2048, 00:12:38.075 "data_size": 63488 00:12:38.075 } 00:12:38.075 ] 00:12:38.075 }' 00:12:38.075 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.075 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.333 [2024-11-08 16:54:07.745481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:38.333 [2024-11-08 16:54:07.745561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.333 [2024-11-08 16:54:07.745587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:38.333 [2024-11-08 16:54:07.745597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.333 [2024-11-08 16:54:07.746070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.333 [2024-11-08 16:54:07.746098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:38.333 [2024-11-08 16:54:07.746188] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:38.333 [2024-11-08 16:54:07.746228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:38.333 pt2 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.333 [2024-11-08 16:54:07.753506] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.333 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.333 "name": "raid_bdev1", 00:12:38.333 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:38.333 "strip_size_kb": 0, 00:12:38.333 "state": "configuring", 00:12:38.333 "raid_level": "raid1", 00:12:38.333 "superblock": true, 00:12:38.333 "num_base_bdevs": 4, 00:12:38.333 "num_base_bdevs_discovered": 1, 00:12:38.333 "num_base_bdevs_operational": 4, 00:12:38.333 "base_bdevs_list": [ 00:12:38.333 { 00:12:38.333 "name": "pt1", 00:12:38.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:38.333 "is_configured": true, 00:12:38.334 "data_offset": 2048, 00:12:38.334 "data_size": 63488 00:12:38.334 }, 00:12:38.334 { 00:12:38.334 "name": null, 00:12:38.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.334 "is_configured": false, 00:12:38.334 "data_offset": 0, 00:12:38.334 "data_size": 63488 00:12:38.334 }, 00:12:38.334 { 00:12:38.334 "name": null, 00:12:38.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:38.334 "is_configured": false, 00:12:38.334 "data_offset": 2048, 00:12:38.334 "data_size": 63488 00:12:38.334 }, 00:12:38.334 { 00:12:38.334 "name": null, 00:12:38.334 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:38.334 "is_configured": false, 00:12:38.334 "data_offset": 2048, 00:12:38.334 "data_size": 63488 00:12:38.334 } 00:12:38.334 ] 00:12:38.334 }' 00:12:38.334 16:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.334 16:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.901 [2024-11-08 16:54:08.188790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:38.901 [2024-11-08 16:54:08.188875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.901 [2024-11-08 16:54:08.188899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:38.901 [2024-11-08 16:54:08.188911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.901 [2024-11-08 16:54:08.189374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.901 [2024-11-08 16:54:08.189410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:38.901 [2024-11-08 16:54:08.189497] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:38.901 [2024-11-08 16:54:08.189532] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:38.901 pt2 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:38.901 16:54:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.901 [2024-11-08 16:54:08.196743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:38.901 [2024-11-08 16:54:08.196828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.901 [2024-11-08 16:54:08.196856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:38.901 [2024-11-08 16:54:08.196872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.901 [2024-11-08 16:54:08.197360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.901 [2024-11-08 16:54:08.197397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:38.901 [2024-11-08 16:54:08.197488] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:38.901 [2024-11-08 16:54:08.197524] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:38.901 pt3 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.901 [2024-11-08 16:54:08.204753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:38.901 [2024-11-08 
16:54:08.204834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.901 [2024-11-08 16:54:08.204857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:38.901 [2024-11-08 16:54:08.204872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.901 [2024-11-08 16:54:08.205333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.901 [2024-11-08 16:54:08.205371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:38.901 [2024-11-08 16:54:08.205459] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:38.901 [2024-11-08 16:54:08.205494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:38.901 [2024-11-08 16:54:08.205627] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:38.901 [2024-11-08 16:54:08.205669] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.901 [2024-11-08 16:54:08.205972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:38.901 [2024-11-08 16:54:08.206133] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:38.901 [2024-11-08 16:54:08.206152] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:12:38.901 [2024-11-08 16:54:08.206285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.901 pt4 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.901 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.902 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.902 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.902 "name": "raid_bdev1", 00:12:38.902 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:38.902 "strip_size_kb": 0, 00:12:38.902 "state": "online", 00:12:38.902 "raid_level": "raid1", 00:12:38.902 "superblock": true, 00:12:38.902 "num_base_bdevs": 4, 00:12:38.902 
"num_base_bdevs_discovered": 4, 00:12:38.902 "num_base_bdevs_operational": 4, 00:12:38.902 "base_bdevs_list": [ 00:12:38.902 { 00:12:38.902 "name": "pt1", 00:12:38.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:38.902 "is_configured": true, 00:12:38.902 "data_offset": 2048, 00:12:38.902 "data_size": 63488 00:12:38.902 }, 00:12:38.902 { 00:12:38.902 "name": "pt2", 00:12:38.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.902 "is_configured": true, 00:12:38.902 "data_offset": 2048, 00:12:38.902 "data_size": 63488 00:12:38.902 }, 00:12:38.902 { 00:12:38.902 "name": "pt3", 00:12:38.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:38.902 "is_configured": true, 00:12:38.902 "data_offset": 2048, 00:12:38.902 "data_size": 63488 00:12:38.902 }, 00:12:38.902 { 00:12:38.902 "name": "pt4", 00:12:38.902 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:38.902 "is_configured": true, 00:12:38.902 "data_offset": 2048, 00:12:38.902 "data_size": 63488 00:12:38.902 } 00:12:38.902 ] 00:12:38.902 }' 00:12:38.902 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.902 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:39.163 [2024-11-08 16:54:08.632418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:39.163 "name": "raid_bdev1", 00:12:39.163 "aliases": [ 00:12:39.163 "3fc1602d-c31f-47cf-a44e-d39f7bb66603" 00:12:39.163 ], 00:12:39.163 "product_name": "Raid Volume", 00:12:39.163 "block_size": 512, 00:12:39.163 "num_blocks": 63488, 00:12:39.163 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:39.163 "assigned_rate_limits": { 00:12:39.163 "rw_ios_per_sec": 0, 00:12:39.163 "rw_mbytes_per_sec": 0, 00:12:39.163 "r_mbytes_per_sec": 0, 00:12:39.163 "w_mbytes_per_sec": 0 00:12:39.163 }, 00:12:39.163 "claimed": false, 00:12:39.163 "zoned": false, 00:12:39.163 "supported_io_types": { 00:12:39.163 "read": true, 00:12:39.163 "write": true, 00:12:39.163 "unmap": false, 00:12:39.163 "flush": false, 00:12:39.163 "reset": true, 00:12:39.163 "nvme_admin": false, 00:12:39.163 "nvme_io": false, 00:12:39.163 "nvme_io_md": false, 00:12:39.163 "write_zeroes": true, 00:12:39.163 "zcopy": false, 00:12:39.163 "get_zone_info": false, 00:12:39.163 "zone_management": false, 00:12:39.163 "zone_append": false, 00:12:39.163 "compare": false, 00:12:39.163 "compare_and_write": false, 00:12:39.163 "abort": false, 00:12:39.163 "seek_hole": false, 00:12:39.163 "seek_data": false, 00:12:39.163 "copy": false, 00:12:39.163 "nvme_iov_md": false 00:12:39.163 }, 00:12:39.163 "memory_domains": [ 00:12:39.163 { 00:12:39.163 "dma_device_id": "system", 00:12:39.163 
"dma_device_type": 1 00:12:39.163 }, 00:12:39.163 { 00:12:39.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.163 "dma_device_type": 2 00:12:39.163 }, 00:12:39.163 { 00:12:39.163 "dma_device_id": "system", 00:12:39.163 "dma_device_type": 1 00:12:39.163 }, 00:12:39.163 { 00:12:39.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.163 "dma_device_type": 2 00:12:39.163 }, 00:12:39.163 { 00:12:39.163 "dma_device_id": "system", 00:12:39.163 "dma_device_type": 1 00:12:39.163 }, 00:12:39.163 { 00:12:39.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.163 "dma_device_type": 2 00:12:39.163 }, 00:12:39.163 { 00:12:39.163 "dma_device_id": "system", 00:12:39.163 "dma_device_type": 1 00:12:39.163 }, 00:12:39.163 { 00:12:39.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.163 "dma_device_type": 2 00:12:39.163 } 00:12:39.163 ], 00:12:39.163 "driver_specific": { 00:12:39.163 "raid": { 00:12:39.163 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:39.163 "strip_size_kb": 0, 00:12:39.163 "state": "online", 00:12:39.163 "raid_level": "raid1", 00:12:39.163 "superblock": true, 00:12:39.163 "num_base_bdevs": 4, 00:12:39.163 "num_base_bdevs_discovered": 4, 00:12:39.163 "num_base_bdevs_operational": 4, 00:12:39.163 "base_bdevs_list": [ 00:12:39.163 { 00:12:39.163 "name": "pt1", 00:12:39.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:39.163 "is_configured": true, 00:12:39.163 "data_offset": 2048, 00:12:39.163 "data_size": 63488 00:12:39.163 }, 00:12:39.163 { 00:12:39.163 "name": "pt2", 00:12:39.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:39.163 "is_configured": true, 00:12:39.163 "data_offset": 2048, 00:12:39.163 "data_size": 63488 00:12:39.163 }, 00:12:39.163 { 00:12:39.163 "name": "pt3", 00:12:39.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:39.163 "is_configured": true, 00:12:39.163 "data_offset": 2048, 00:12:39.163 "data_size": 63488 00:12:39.163 }, 00:12:39.163 { 00:12:39.163 "name": "pt4", 00:12:39.163 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:39.163 "is_configured": true, 00:12:39.163 "data_offset": 2048, 00:12:39.163 "data_size": 63488 00:12:39.163 } 00:12:39.163 ] 00:12:39.163 } 00:12:39.163 } 00:12:39.163 }' 00:12:39.163 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:39.423 pt2 00:12:39.423 pt3 00:12:39.423 pt4' 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.423 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:39.424 [2024-11-08 16:54:08.927916] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.424 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.682 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3fc1602d-c31f-47cf-a44e-d39f7bb66603 '!=' 3fc1602d-c31f-47cf-a44e-d39f7bb66603 ']' 00:12:39.682 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:39.682 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:39.682 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:39.682 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:39.682 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.682 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.682 [2024-11-08 16:54:08.983518] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:39.683 16:54:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.683 16:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.683 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.683 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.683 "name": "raid_bdev1", 00:12:39.683 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:39.683 "strip_size_kb": 0, 00:12:39.683 "state": "online", 
00:12:39.683 "raid_level": "raid1", 00:12:39.683 "superblock": true, 00:12:39.683 "num_base_bdevs": 4, 00:12:39.683 "num_base_bdevs_discovered": 3, 00:12:39.683 "num_base_bdevs_operational": 3, 00:12:39.683 "base_bdevs_list": [ 00:12:39.683 { 00:12:39.683 "name": null, 00:12:39.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.683 "is_configured": false, 00:12:39.683 "data_offset": 0, 00:12:39.683 "data_size": 63488 00:12:39.683 }, 00:12:39.683 { 00:12:39.683 "name": "pt2", 00:12:39.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:39.683 "is_configured": true, 00:12:39.683 "data_offset": 2048, 00:12:39.683 "data_size": 63488 00:12:39.683 }, 00:12:39.683 { 00:12:39.683 "name": "pt3", 00:12:39.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:39.683 "is_configured": true, 00:12:39.683 "data_offset": 2048, 00:12:39.683 "data_size": 63488 00:12:39.683 }, 00:12:39.683 { 00:12:39.683 "name": "pt4", 00:12:39.683 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:39.683 "is_configured": true, 00:12:39.683 "data_offset": 2048, 00:12:39.683 "data_size": 63488 00:12:39.683 } 00:12:39.683 ] 00:12:39.683 }' 00:12:39.683 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.683 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.942 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:39.942 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.942 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.942 [2024-11-08 16:54:09.414777] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:39.942 [2024-11-08 16:54:09.414821] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.942 [2024-11-08 16:54:09.414925] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:39.942 [2024-11-08 16:54:09.415009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.942 [2024-11-08 16:54:09.415023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:12:39.942 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.942 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.942 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.942 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.942 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:39.942 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:40.201 
16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.201 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.201 [2024-11-08 16:54:09.518629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:40.201 [2024-11-08 16:54:09.518802] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.201 [2024-11-08 16:54:09.518859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:40.201 [2024-11-08 16:54:09.518899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.201 [2024-11-08 16:54:09.521495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.201 [2024-11-08 16:54:09.521600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:40.202 [2024-11-08 16:54:09.521752] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:40.202 [2024-11-08 16:54:09.521833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:40.202 pt2 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.202 "name": "raid_bdev1", 00:12:40.202 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:40.202 "strip_size_kb": 0, 00:12:40.202 "state": "configuring", 00:12:40.202 "raid_level": "raid1", 00:12:40.202 "superblock": true, 00:12:40.202 "num_base_bdevs": 4, 00:12:40.202 "num_base_bdevs_discovered": 1, 00:12:40.202 "num_base_bdevs_operational": 3, 00:12:40.202 "base_bdevs_list": [ 00:12:40.202 { 00:12:40.202 "name": null, 00:12:40.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.202 "is_configured": false, 00:12:40.202 "data_offset": 2048, 00:12:40.202 "data_size": 63488 00:12:40.202 }, 00:12:40.202 { 00:12:40.202 "name": "pt2", 00:12:40.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.202 "is_configured": true, 00:12:40.202 "data_offset": 2048, 00:12:40.202 "data_size": 63488 00:12:40.202 }, 00:12:40.202 { 00:12:40.202 "name": null, 00:12:40.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.202 "is_configured": false, 00:12:40.202 "data_offset": 2048, 00:12:40.202 "data_size": 63488 00:12:40.202 }, 00:12:40.202 { 00:12:40.202 "name": null, 00:12:40.202 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.202 "is_configured": false, 00:12:40.202 "data_offset": 2048, 00:12:40.202 "data_size": 63488 00:12:40.202 } 00:12:40.202 ] 00:12:40.202 }' 
00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.202 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.461 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:40.461 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:40.461 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:40.461 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.461 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.461 [2024-11-08 16:54:09.986069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:40.461 [2024-11-08 16:54:09.986209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.461 [2024-11-08 16:54:09.986257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:40.461 [2024-11-08 16:54:09.986288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.461 [2024-11-08 16:54:09.987263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.461 [2024-11-08 16:54:09.987372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:40.461 [2024-11-08 16:54:09.987577] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:40.721 [2024-11-08 16:54:09.987701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:40.721 pt3 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.721 16:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.721 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.721 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.721 "name": "raid_bdev1", 00:12:40.721 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:40.721 "strip_size_kb": 0, 00:12:40.721 "state": "configuring", 00:12:40.721 "raid_level": "raid1", 00:12:40.721 "superblock": true, 00:12:40.721 "num_base_bdevs": 4, 00:12:40.721 "num_base_bdevs_discovered": 2, 00:12:40.721 "num_base_bdevs_operational": 3, 00:12:40.721 
"base_bdevs_list": [ 00:12:40.721 { 00:12:40.721 "name": null, 00:12:40.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.721 "is_configured": false, 00:12:40.721 "data_offset": 2048, 00:12:40.721 "data_size": 63488 00:12:40.721 }, 00:12:40.721 { 00:12:40.721 "name": "pt2", 00:12:40.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.721 "is_configured": true, 00:12:40.721 "data_offset": 2048, 00:12:40.721 "data_size": 63488 00:12:40.721 }, 00:12:40.721 { 00:12:40.721 "name": "pt3", 00:12:40.721 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.721 "is_configured": true, 00:12:40.721 "data_offset": 2048, 00:12:40.721 "data_size": 63488 00:12:40.721 }, 00:12:40.721 { 00:12:40.721 "name": null, 00:12:40.721 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.721 "is_configured": false, 00:12:40.721 "data_offset": 2048, 00:12:40.721 "data_size": 63488 00:12:40.721 } 00:12:40.721 ] 00:12:40.721 }' 00:12:40.721 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.721 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.980 [2024-11-08 16:54:10.457034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:40.980 [2024-11-08 16:54:10.457197] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.980 [2024-11-08 16:54:10.457269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:40.980 [2024-11-08 16:54:10.457307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.980 [2024-11-08 16:54:10.457811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.980 [2024-11-08 16:54:10.457879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:40.980 [2024-11-08 16:54:10.458004] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:40.980 [2024-11-08 16:54:10.458076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:40.980 [2024-11-08 16:54:10.458231] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:40.980 [2024-11-08 16:54:10.458276] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:40.980 [2024-11-08 16:54:10.458568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:40.980 [2024-11-08 16:54:10.458772] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:40.980 [2024-11-08 16:54:10.458819] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:12:40.980 [2024-11-08 16:54:10.458990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.980 pt4 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.980 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.240 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.240 "name": "raid_bdev1", 00:12:41.240 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:41.240 "strip_size_kb": 0, 00:12:41.240 "state": "online", 00:12:41.240 "raid_level": "raid1", 00:12:41.240 "superblock": true, 00:12:41.240 "num_base_bdevs": 4, 00:12:41.240 "num_base_bdevs_discovered": 3, 00:12:41.240 "num_base_bdevs_operational": 3, 00:12:41.240 "base_bdevs_list": [ 00:12:41.240 { 00:12:41.240 "name": null, 00:12:41.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.240 "is_configured": false, 00:12:41.240 
"data_offset": 2048, 00:12:41.240 "data_size": 63488 00:12:41.240 }, 00:12:41.240 { 00:12:41.240 "name": "pt2", 00:12:41.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.240 "is_configured": true, 00:12:41.240 "data_offset": 2048, 00:12:41.240 "data_size": 63488 00:12:41.240 }, 00:12:41.240 { 00:12:41.240 "name": "pt3", 00:12:41.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.240 "is_configured": true, 00:12:41.240 "data_offset": 2048, 00:12:41.240 "data_size": 63488 00:12:41.240 }, 00:12:41.240 { 00:12:41.240 "name": "pt4", 00:12:41.240 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.240 "is_configured": true, 00:12:41.240 "data_offset": 2048, 00:12:41.240 "data_size": 63488 00:12:41.240 } 00:12:41.240 ] 00:12:41.240 }' 00:12:41.240 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.240 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.500 [2024-11-08 16:54:10.940309] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.500 [2024-11-08 16:54:10.940352] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.500 [2024-11-08 16:54:10.940451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.500 [2024-11-08 16:54:10.940538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.500 [2024-11-08 16:54:10.940550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:12:41.500 16:54:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.500 16:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.500 [2024-11-08 16:54:11.016227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:41.500 [2024-11-08 16:54:11.016317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:41.500 [2024-11-08 16:54:11.016349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:41.500 [2024-11-08 16:54:11.016360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.500 [2024-11-08 16:54:11.019214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.500 [2024-11-08 16:54:11.019269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:41.500 [2024-11-08 16:54:11.019375] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:41.500 [2024-11-08 16:54:11.019430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:41.500 [2024-11-08 16:54:11.019575] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:41.500 [2024-11-08 16:54:11.019591] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.500 [2024-11-08 16:54:11.019617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:12:41.500 [2024-11-08 16:54:11.019693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:41.500 [2024-11-08 16:54:11.019808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:41.500 pt1 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.500 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.759 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.759 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.759 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.759 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.759 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.759 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.759 "name": "raid_bdev1", 00:12:41.759 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:41.759 "strip_size_kb": 0, 00:12:41.759 "state": "configuring", 00:12:41.759 "raid_level": "raid1", 00:12:41.759 "superblock": true, 00:12:41.759 "num_base_bdevs": 4, 00:12:41.759 "num_base_bdevs_discovered": 2, 00:12:41.759 "num_base_bdevs_operational": 3, 00:12:41.759 "base_bdevs_list": [ 00:12:41.759 { 00:12:41.759 "name": null, 00:12:41.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.759 "is_configured": false, 00:12:41.759 "data_offset": 2048, 00:12:41.759 
"data_size": 63488 00:12:41.759 }, 00:12:41.759 { 00:12:41.759 "name": "pt2", 00:12:41.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.759 "is_configured": true, 00:12:41.759 "data_offset": 2048, 00:12:41.759 "data_size": 63488 00:12:41.759 }, 00:12:41.759 { 00:12:41.759 "name": "pt3", 00:12:41.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.759 "is_configured": true, 00:12:41.759 "data_offset": 2048, 00:12:41.759 "data_size": 63488 00:12:41.759 }, 00:12:41.759 { 00:12:41.759 "name": null, 00:12:41.759 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.759 "is_configured": false, 00:12:41.759 "data_offset": 2048, 00:12:41.759 "data_size": 63488 00:12:41.759 } 00:12:41.759 ] 00:12:41.759 }' 00:12:41.759 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.759 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.018 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:42.018 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:42.018 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.018 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.018 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.277 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:42.277 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:42.277 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.278 [2024-11-08 
16:54:11.567298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:42.278 [2024-11-08 16:54:11.567428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.278 [2024-11-08 16:54:11.567474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:42.278 [2024-11-08 16:54:11.567554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.278 [2024-11-08 16:54:11.568100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.278 [2024-11-08 16:54:11.568176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:42.278 [2024-11-08 16:54:11.568291] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:42.278 [2024-11-08 16:54:11.568355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:42.278 [2024-11-08 16:54:11.568506] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:42.278 [2024-11-08 16:54:11.568566] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:42.278 [2024-11-08 16:54:11.568881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:42.278 [2024-11-08 16:54:11.569086] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:42.278 [2024-11-08 16:54:11.569131] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:42.278 [2024-11-08 16:54:11.569304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.278 pt4 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:42.278 16:54:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.278 "name": "raid_bdev1", 00:12:42.278 "uuid": "3fc1602d-c31f-47cf-a44e-d39f7bb66603", 00:12:42.278 "strip_size_kb": 0, 00:12:42.278 "state": "online", 00:12:42.278 "raid_level": "raid1", 00:12:42.278 "superblock": true, 00:12:42.278 "num_base_bdevs": 4, 00:12:42.278 "num_base_bdevs_discovered": 3, 00:12:42.278 "num_base_bdevs_operational": 3, 00:12:42.278 "base_bdevs_list": [ 00:12:42.278 { 
00:12:42.278 "name": null, 00:12:42.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.278 "is_configured": false, 00:12:42.278 "data_offset": 2048, 00:12:42.278 "data_size": 63488 00:12:42.278 }, 00:12:42.278 { 00:12:42.278 "name": "pt2", 00:12:42.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.278 "is_configured": true, 00:12:42.278 "data_offset": 2048, 00:12:42.278 "data_size": 63488 00:12:42.278 }, 00:12:42.278 { 00:12:42.278 "name": "pt3", 00:12:42.278 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.278 "is_configured": true, 00:12:42.278 "data_offset": 2048, 00:12:42.278 "data_size": 63488 00:12:42.278 }, 00:12:42.278 { 00:12:42.278 "name": "pt4", 00:12:42.278 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:42.278 "is_configured": true, 00:12:42.278 "data_offset": 2048, 00:12:42.278 "data_size": 63488 00:12:42.278 } 00:12:42.278 ] 00:12:42.278 }' 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.278 16:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.538 16:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:42.538 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.538 16:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:42.538 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.797 
16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:42.797 [2024-11-08 16:54:12.114907] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3fc1602d-c31f-47cf-a44e-d39f7bb66603 '!=' 3fc1602d-c31f-47cf-a44e-d39f7bb66603 ']' 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85298 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85298 ']' 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85298 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85298 00:12:42.797 killing process with pid 85298 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85298' 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85298 00:12:42.797 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85298 00:12:42.797 [2024-11-08 16:54:12.200546] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.797 [2024-11-08 16:54:12.200670] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.797 [2024-11-08 16:54:12.200764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.797 [2024-11-08 16:54:12.200781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:42.797 [2024-11-08 16:54:12.251518] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.055 16:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:43.055 00:12:43.055 real 0m7.320s 00:12:43.055 user 0m12.339s 00:12:43.055 sys 0m1.549s 00:12:43.055 ************************************ 00:12:43.055 END TEST raid_superblock_test 00:12:43.055 ************************************ 00:12:43.055 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.055 16:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.055 16:54:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:43.055 16:54:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:43.055 16:54:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:43.055 16:54:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:43.055 ************************************ 00:12:43.055 START TEST raid_read_error_test 00:12:43.055 ************************************ 00:12:43.055 16:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:12:43.055 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:43.055 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:43.055 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:43.313 16:54:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QbArt7bDN3 00:12:43.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85774 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85774 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85774 ']' 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:43.313 16:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.313 [2024-11-08 16:54:12.687817] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:43.313 [2024-11-08 16:54:12.687968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85774 ] 00:12:43.572 [2024-11-08 16:54:12.852798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.572 [2024-11-08 16:54:12.908588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.572 [2024-11-08 16:54:12.955585] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.572 [2024-11-08 16:54:12.955625] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.139 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:44.139 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:44.139 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.139 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:44.139 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.140 BaseBdev1_malloc 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.140 true 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.140 [2024-11-08 16:54:13.636592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:44.140 [2024-11-08 16:54:13.636676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.140 [2024-11-08 16:54:13.636714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:44.140 [2024-11-08 16:54:13.636730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.140 [2024-11-08 16:54:13.639032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.140 [2024-11-08 16:54:13.639071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:44.140 BaseBdev1 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.140 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 BaseBdev2_malloc 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 true 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 [2024-11-08 16:54:13.685339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:44.399 [2024-11-08 16:54:13.685460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.399 [2024-11-08 16:54:13.685511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:44.399 [2024-11-08 16:54:13.685560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.399 [2024-11-08 16:54:13.687823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.399 [2024-11-08 16:54:13.687901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:44.399 BaseBdev2 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 BaseBdev3_malloc 00:12:44.399 16:54:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 true 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 [2024-11-08 16:54:13.726391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:44.399 [2024-11-08 16:54:13.726506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.399 [2024-11-08 16:54:13.726548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:44.399 [2024-11-08 16:54:13.726579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.399 [2024-11-08 16:54:13.729108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.399 [2024-11-08 16:54:13.729188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:44.399 BaseBdev3 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 BaseBdev4_malloc 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 true 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 [2024-11-08 16:54:13.767941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:44.399 [2024-11-08 16:54:13.768069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.399 [2024-11-08 16:54:13.768119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:44.399 [2024-11-08 16:54:13.768206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.399 [2024-11-08 16:54:13.770902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.399 [2024-11-08 16:54:13.770988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:44.399 BaseBdev4 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 [2024-11-08 16:54:13.780014] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.399 [2024-11-08 16:54:13.782438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.399 [2024-11-08 16:54:13.782591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.399 [2024-11-08 16:54:13.782760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.399 [2024-11-08 16:54:13.783096] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:12:44.399 [2024-11-08 16:54:13.783195] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:44.399 [2024-11-08 16:54:13.783666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:44.399 [2024-11-08 16:54:13.783943] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:12:44.399 [2024-11-08 16:54:13.784028] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:12:44.399 [2024-11-08 16:54:13.784392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:44.399 16:54:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.399 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.400 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.400 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.400 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.400 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.400 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.400 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.400 "name": "raid_bdev1", 00:12:44.400 "uuid": "9227f6f3-c1a3-4dac-bd86-8064a93c7ae7", 00:12:44.400 "strip_size_kb": 0, 00:12:44.400 "state": "online", 00:12:44.400 "raid_level": "raid1", 00:12:44.400 "superblock": true, 00:12:44.400 "num_base_bdevs": 4, 00:12:44.400 "num_base_bdevs_discovered": 4, 00:12:44.400 "num_base_bdevs_operational": 4, 00:12:44.400 "base_bdevs_list": [ 00:12:44.400 { 
00:12:44.400 "name": "BaseBdev1", 00:12:44.400 "uuid": "3fc53e21-b891-5bb1-a955-a83d4011cc1b", 00:12:44.400 "is_configured": true, 00:12:44.400 "data_offset": 2048, 00:12:44.400 "data_size": 63488 00:12:44.400 }, 00:12:44.400 { 00:12:44.400 "name": "BaseBdev2", 00:12:44.400 "uuid": "3cd43e91-c999-5a62-8429-ea9bb0703e2f", 00:12:44.400 "is_configured": true, 00:12:44.400 "data_offset": 2048, 00:12:44.400 "data_size": 63488 00:12:44.400 }, 00:12:44.400 { 00:12:44.400 "name": "BaseBdev3", 00:12:44.400 "uuid": "a97bc160-cd6f-55e2-9001-fa659ee2469d", 00:12:44.400 "is_configured": true, 00:12:44.400 "data_offset": 2048, 00:12:44.400 "data_size": 63488 00:12:44.400 }, 00:12:44.400 { 00:12:44.400 "name": "BaseBdev4", 00:12:44.400 "uuid": "f193363f-9fb1-5754-a5c0-aebdb96e1c15", 00:12:44.400 "is_configured": true, 00:12:44.400 "data_offset": 2048, 00:12:44.400 "data_size": 63488 00:12:44.400 } 00:12:44.400 ] 00:12:44.400 }' 00:12:44.400 16:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.400 16:54:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.967 16:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:44.967 16:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:44.967 [2024-11-08 16:54:14.351840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.901 16:54:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.901 16:54:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.901 "name": "raid_bdev1", 00:12:45.901 "uuid": "9227f6f3-c1a3-4dac-bd86-8064a93c7ae7", 00:12:45.901 "strip_size_kb": 0, 00:12:45.901 "state": "online", 00:12:45.901 "raid_level": "raid1", 00:12:45.901 "superblock": true, 00:12:45.901 "num_base_bdevs": 4, 00:12:45.901 "num_base_bdevs_discovered": 4, 00:12:45.901 "num_base_bdevs_operational": 4, 00:12:45.901 "base_bdevs_list": [ 00:12:45.901 { 00:12:45.901 "name": "BaseBdev1", 00:12:45.901 "uuid": "3fc53e21-b891-5bb1-a955-a83d4011cc1b", 00:12:45.901 "is_configured": true, 00:12:45.901 "data_offset": 2048, 00:12:45.901 "data_size": 63488 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "name": "BaseBdev2", 00:12:45.901 "uuid": "3cd43e91-c999-5a62-8429-ea9bb0703e2f", 00:12:45.901 "is_configured": true, 00:12:45.901 "data_offset": 2048, 00:12:45.901 "data_size": 63488 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "name": "BaseBdev3", 00:12:45.901 "uuid": "a97bc160-cd6f-55e2-9001-fa659ee2469d", 00:12:45.901 "is_configured": true, 00:12:45.901 "data_offset": 2048, 00:12:45.901 "data_size": 63488 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "name": "BaseBdev4", 00:12:45.901 "uuid": "f193363f-9fb1-5754-a5c0-aebdb96e1c15", 00:12:45.901 "is_configured": true, 00:12:45.901 "data_offset": 2048, 00:12:45.901 "data_size": 63488 00:12:45.901 } 00:12:45.901 ] 00:12:45.901 }' 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.901 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:46.471 [2024-11-08 16:54:15.773116] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.471 [2024-11-08 16:54:15.773228] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.471 [2024-11-08 16:54:15.776444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.471 [2024-11-08 16:54:15.776567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.471 [2024-11-08 16:54:15.776751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.471 [2024-11-08 16:54:15.776809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:12:46.471 { 00:12:46.471 "results": [ 00:12:46.471 { 00:12:46.471 "job": "raid_bdev1", 00:12:46.471 "core_mask": "0x1", 00:12:46.471 "workload": "randrw", 00:12:46.471 "percentage": 50, 00:12:46.471 "status": "finished", 00:12:46.471 "queue_depth": 1, 00:12:46.471 "io_size": 131072, 00:12:46.471 "runtime": 1.421942, 00:12:46.471 "iops": 9600.250924439955, 00:12:46.471 "mibps": 1200.0313655549944, 00:12:46.471 "io_failed": 0, 00:12:46.471 "io_timeout": 0, 00:12:46.471 "avg_latency_us": 101.02494786600083, 00:12:46.471 "min_latency_us": 25.9353711790393, 00:12:46.471 "max_latency_us": 1860.1921397379913 00:12:46.471 } 00:12:46.471 ], 00:12:46.471 "core_count": 1 00:12:46.471 } 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85774 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85774 ']' 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85774 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85774 00:12:46.471 killing process with pid 85774 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:46.471 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:46.472 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85774' 00:12:46.472 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85774 00:12:46.472 [2024-11-08 16:54:15.823226] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.472 16:54:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85774 00:12:46.472 [2024-11-08 16:54:15.861979] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:46.732 16:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QbArt7bDN3 00:12:46.732 16:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:46.732 16:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:46.732 16:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:46.732 16:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:46.732 16:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:46.732 16:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:46.732 16:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:46.732 00:12:46.732 real 0m3.547s 00:12:46.732 user 0m4.583s 00:12:46.732 sys 0m0.584s 
00:12:46.732 16:54:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.732 16:54:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.732 ************************************ 00:12:46.732 END TEST raid_read_error_test 00:12:46.732 ************************************ 00:12:46.732 16:54:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:46.732 16:54:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:46.732 16:54:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.732 16:54:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:46.732 ************************************ 00:12:46.732 START TEST raid_write_error_test 00:12:46.732 ************************************ 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XRYJqWYhce 00:12:46.732 16:54:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85909 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85909 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85909 ']' 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.732 16:54:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.992 [2024-11-08 16:54:16.315158] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:46.992 [2024-11-08 16:54:16.315311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85909 ] 00:12:46.992 [2024-11-08 16:54:16.489612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.250 [2024-11-08 16:54:16.544902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.250 [2024-11-08 16:54:16.590902] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.250 [2024-11-08 16:54:16.590958] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.818 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:47.818 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:47.818 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:47.818 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:47.818 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.818 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.818 BaseBdev1_malloc 00:12:47.818 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.818 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:47.818 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.819 true 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.819 [2024-11-08 16:54:17.264439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:47.819 [2024-11-08 16:54:17.264518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.819 [2024-11-08 16:54:17.264548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:47.819 [2024-11-08 16:54:17.264561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.819 [2024-11-08 16:54:17.266994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.819 [2024-11-08 16:54:17.267158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:47.819 BaseBdev1 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.819 BaseBdev2_malloc 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:47.819 16:54:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.819 true 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.819 [2024-11-08 16:54:17.316011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:47.819 [2024-11-08 16:54:17.316088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.819 [2024-11-08 16:54:17.316115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:47.819 [2024-11-08 16:54:17.316126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.819 [2024-11-08 16:54:17.318921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.819 [2024-11-08 16:54:17.319050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:47.819 BaseBdev2 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:47.819 BaseBdev3_malloc 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.819 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.078 true 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.078 [2024-11-08 16:54:17.357764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:48.078 [2024-11-08 16:54:17.357924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.078 [2024-11-08 16:54:17.357958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:48.078 [2024-11-08 16:54:17.357970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.078 [2024-11-08 16:54:17.360491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.078 [2024-11-08 16:54:17.360542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:48.078 BaseBdev3 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.078 BaseBdev4_malloc 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.078 true 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.078 [2024-11-08 16:54:17.399528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:48.078 [2024-11-08 16:54:17.399671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.078 [2024-11-08 16:54:17.399723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:48.078 [2024-11-08 16:54:17.399762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.078 [2024-11-08 16:54:17.402173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.078 [2024-11-08 16:54:17.402271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:48.078 BaseBdev4 
00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.078 [2024-11-08 16:54:17.411569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.078 [2024-11-08 16:54:17.413814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.078 [2024-11-08 16:54:17.413919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.078 [2024-11-08 16:54:17.413984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:48.078 [2024-11-08 16:54:17.414219] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:12:48.078 [2024-11-08 16:54:17.414233] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:48.078 [2024-11-08 16:54:17.414566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:48.078 [2024-11-08 16:54:17.414771] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:12:48.078 [2024-11-08 16:54:17.414787] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:12:48.078 [2024-11-08 16:54:17.414964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.078 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.079 "name": "raid_bdev1", 00:12:48.079 "uuid": "f5260f72-3bcc-4e77-81d2-b7d929a5df34", 00:12:48.079 "strip_size_kb": 0, 00:12:48.079 "state": "online", 00:12:48.079 "raid_level": "raid1", 00:12:48.079 "superblock": true, 00:12:48.079 "num_base_bdevs": 4, 00:12:48.079 "num_base_bdevs_discovered": 4, 00:12:48.079 
"num_base_bdevs_operational": 4, 00:12:48.079 "base_bdevs_list": [ 00:12:48.079 { 00:12:48.079 "name": "BaseBdev1", 00:12:48.079 "uuid": "b76adf52-afef-51d6-a541-bf93ff94be90", 00:12:48.079 "is_configured": true, 00:12:48.079 "data_offset": 2048, 00:12:48.079 "data_size": 63488 00:12:48.079 }, 00:12:48.079 { 00:12:48.079 "name": "BaseBdev2", 00:12:48.079 "uuid": "b8da4168-e068-5e3b-bb89-1fd8d631269d", 00:12:48.079 "is_configured": true, 00:12:48.079 "data_offset": 2048, 00:12:48.079 "data_size": 63488 00:12:48.079 }, 00:12:48.079 { 00:12:48.079 "name": "BaseBdev3", 00:12:48.079 "uuid": "68c1c75c-6224-519f-bf25-5204a0d6f4fc", 00:12:48.079 "is_configured": true, 00:12:48.079 "data_offset": 2048, 00:12:48.079 "data_size": 63488 00:12:48.079 }, 00:12:48.079 { 00:12:48.079 "name": "BaseBdev4", 00:12:48.079 "uuid": "488eb98c-4e89-5f98-9435-18b77abfc8b5", 00:12:48.079 "is_configured": true, 00:12:48.079 "data_offset": 2048, 00:12:48.079 "data_size": 63488 00:12:48.079 } 00:12:48.079 ] 00:12:48.079 }' 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.079 16:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.647 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:48.647 16:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:48.647 [2024-11-08 16:54:17.955362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.594 [2024-11-08 16:54:18.883663] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:49.594 [2024-11-08 16:54:18.883838] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.594 [2024-11-08 16:54:18.884119] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.594 "name": "raid_bdev1", 00:12:49.594 "uuid": "f5260f72-3bcc-4e77-81d2-b7d929a5df34", 00:12:49.594 "strip_size_kb": 0, 00:12:49.594 "state": "online", 00:12:49.594 "raid_level": "raid1", 00:12:49.594 "superblock": true, 00:12:49.594 "num_base_bdevs": 4, 00:12:49.594 "num_base_bdevs_discovered": 3, 00:12:49.594 "num_base_bdevs_operational": 3, 00:12:49.594 "base_bdevs_list": [ 00:12:49.594 { 00:12:49.594 "name": null, 00:12:49.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.594 "is_configured": false, 00:12:49.594 "data_offset": 0, 00:12:49.594 "data_size": 63488 00:12:49.594 }, 00:12:49.594 { 00:12:49.594 "name": "BaseBdev2", 00:12:49.594 "uuid": "b8da4168-e068-5e3b-bb89-1fd8d631269d", 00:12:49.594 "is_configured": true, 00:12:49.594 "data_offset": 2048, 00:12:49.594 "data_size": 63488 00:12:49.594 }, 00:12:49.594 { 00:12:49.594 "name": "BaseBdev3", 00:12:49.594 "uuid": "68c1c75c-6224-519f-bf25-5204a0d6f4fc", 00:12:49.594 "is_configured": true, 00:12:49.594 "data_offset": 2048, 00:12:49.594 "data_size": 63488 00:12:49.594 }, 00:12:49.594 { 00:12:49.594 "name": "BaseBdev4", 00:12:49.594 "uuid": "488eb98c-4e89-5f98-9435-18b77abfc8b5", 00:12:49.594 "is_configured": true, 00:12:49.594 "data_offset": 2048, 00:12:49.594 "data_size": 63488 00:12:49.594 } 00:12:49.594 ] 
00:12:49.594 }' 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.594 16:54:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.854 [2024-11-08 16:54:19.328324] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:49.854 [2024-11-08 16:54:19.328384] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.854 [2024-11-08 16:54:19.331498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.854 [2024-11-08 16:54:19.331568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.854 [2024-11-08 16:54:19.331694] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.854 [2024-11-08 16:54:19.331711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:12:49.854 { 00:12:49.854 "results": [ 00:12:49.854 { 00:12:49.854 "job": "raid_bdev1", 00:12:49.854 "core_mask": "0x1", 00:12:49.854 "workload": "randrw", 00:12:49.854 "percentage": 50, 00:12:49.854 "status": "finished", 00:12:49.854 "queue_depth": 1, 00:12:49.854 "io_size": 131072, 00:12:49.854 "runtime": 1.373074, 00:12:49.854 "iops": 10263.103081115803, 00:12:49.854 "mibps": 1282.8878851394754, 00:12:49.854 "io_failed": 0, 00:12:49.854 "io_timeout": 0, 00:12:49.854 "avg_latency_us": 94.10908676234898, 00:12:49.854 "min_latency_us": 25.3764192139738, 00:12:49.854 "max_latency_us": 1802.955458515284 00:12:49.854 } 00:12:49.854 ], 00:12:49.854 "core_count": 1 
00:12:49.854 } 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85909 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85909 ']' 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85909 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85909 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.854 killing process with pid 85909 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85909' 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85909 00:12:49.854 [2024-11-08 16:54:19.379235] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:49.854 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85909 00:12:50.113 [2024-11-08 16:54:19.417961] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.372 16:54:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XRYJqWYhce 00:12:50.372 16:54:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:50.372 16:54:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:50.372 16:54:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:50.372 16:54:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:50.372 16:54:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:50.372 16:54:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:50.372 16:54:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:50.372 00:12:50.372 real 0m3.482s 00:12:50.372 user 0m4.383s 00:12:50.372 sys 0m0.606s 00:12:50.372 ************************************ 00:12:50.372 END TEST raid_write_error_test 00:12:50.372 ************************************ 00:12:50.372 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.372 16:54:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.372 16:54:19 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:50.372 16:54:19 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:50.372 16:54:19 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:50.372 16:54:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:50.372 16:54:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.372 16:54:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.372 ************************************ 00:12:50.372 START TEST raid_rebuild_test 00:12:50.372 ************************************ 00:12:50.372 16:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:12:50.372 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:50.372 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:50.373 
16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86036 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86036 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86036 ']' 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.373 16:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.373 [2024-11-08 16:54:19.868097] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:50.373 [2024-11-08 16:54:19.868361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86036 ] 00:12:50.373 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:50.373 Zero copy mechanism will not be used. 
00:12:50.631 [2024-11-08 16:54:20.019355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.631 [2024-11-08 16:54:20.082920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.631 [2024-11-08 16:54:20.130733] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.631 [2024-11-08 16:54:20.130856] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.567 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.567 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:51.567 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.567 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:51.567 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.567 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.567 BaseBdev1_malloc 00:12:51.567 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.567 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:51.567 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.568 [2024-11-08 16:54:20.823829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:51.568 [2024-11-08 16:54:20.824030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.568 [2024-11-08 16:54:20.824068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:51.568 [2024-11-08 16:54:20.824096] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.568 [2024-11-08 16:54:20.826690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.568 [2024-11-08 16:54:20.826735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:51.568 BaseBdev1 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.568 BaseBdev2_malloc 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.568 [2024-11-08 16:54:20.869382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:51.568 [2024-11-08 16:54:20.869470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.568 [2024-11-08 16:54:20.869497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:51.568 [2024-11-08 16:54:20.869507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.568 [2024-11-08 16:54:20.872173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.568 [2024-11-08 16:54:20.872332] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:51.568 BaseBdev2 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.568 spare_malloc 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.568 spare_delay 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.568 [2024-11-08 16:54:20.910870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:51.568 [2024-11-08 16:54:20.911044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.568 [2024-11-08 16:54:20.911079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:51.568 [2024-11-08 16:54:20.911091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.568 [2024-11-08 
16:54:20.913739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.568 [2024-11-08 16:54:20.913784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:51.568 spare 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.568 [2024-11-08 16:54:20.922896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.568 [2024-11-08 16:54:20.925182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.568 [2024-11-08 16:54:20.925391] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:51.568 [2024-11-08 16:54:20.925410] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:51.568 [2024-11-08 16:54:20.925759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:51.568 [2024-11-08 16:54:20.925920] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:51.568 [2024-11-08 16:54:20.925934] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:51.568 [2024-11-08 16:54:20.926117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:51.568 16:54:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.568 "name": "raid_bdev1", 00:12:51.568 "uuid": "31f7f160-11bd-4c32-967c-acc7800d4948", 00:12:51.568 "strip_size_kb": 0, 00:12:51.568 "state": "online", 00:12:51.568 "raid_level": "raid1", 00:12:51.568 "superblock": false, 00:12:51.568 "num_base_bdevs": 2, 00:12:51.568 "num_base_bdevs_discovered": 2, 00:12:51.568 "num_base_bdevs_operational": 2, 00:12:51.568 "base_bdevs_list": [ 00:12:51.568 { 00:12:51.568 "name": "BaseBdev1", 
00:12:51.568 "uuid": "cf38149f-f6f7-5dd9-a290-21831ef7a4e6", 00:12:51.568 "is_configured": true, 00:12:51.568 "data_offset": 0, 00:12:51.568 "data_size": 65536 00:12:51.568 }, 00:12:51.568 { 00:12:51.568 "name": "BaseBdev2", 00:12:51.568 "uuid": "499778da-e8e6-59b3-8fdb-55045875b076", 00:12:51.568 "is_configured": true, 00:12:51.568 "data_offset": 0, 00:12:51.568 "data_size": 65536 00:12:51.568 } 00:12:51.568 ] 00:12:51.568 }' 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.568 16:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.138 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:52.138 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:52.138 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.138 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.139 [2024-11-08 16:54:21.430408] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:52.139 
16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.139 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:52.400 [2024-11-08 16:54:21.753591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:52.400 /dev/nbd0 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.400 1+0 records in 00:12:52.400 1+0 records out 00:12:52.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472191 s, 8.7 MB/s 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:52.400 16:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:12:57.675 65536+0 records in 00:12:57.675 65536+0 records out 00:12:57.675 33554432 bytes (34 MB, 32 MiB) copied, 4.50916 s, 7.4 MB/s 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:57.675 [2024-11-08 16:54:26.629913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.675 [2024-11-08 16:54:26.650499] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.675 16:54:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.675 "name": "raid_bdev1", 00:12:57.675 "uuid": "31f7f160-11bd-4c32-967c-acc7800d4948", 00:12:57.675 "strip_size_kb": 0, 00:12:57.675 "state": "online", 00:12:57.675 "raid_level": "raid1", 00:12:57.675 "superblock": false, 00:12:57.675 "num_base_bdevs": 2, 00:12:57.675 "num_base_bdevs_discovered": 1, 00:12:57.675 "num_base_bdevs_operational": 1, 00:12:57.675 "base_bdevs_list": [ 00:12:57.675 { 00:12:57.675 "name": null, 00:12:57.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.675 "is_configured": false, 00:12:57.675 "data_offset": 0, 00:12:57.675 "data_size": 65536 00:12:57.675 }, 00:12:57.675 { 00:12:57.675 "name": "BaseBdev2", 00:12:57.675 "uuid": "499778da-e8e6-59b3-8fdb-55045875b076", 00:12:57.675 "is_configured": true, 00:12:57.675 "data_offset": 0, 00:12:57.675 "data_size": 65536 00:12:57.675 } 00:12:57.675 ] 00:12:57.675 }' 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.675 16:54:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.675 16:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:57.675 16:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.675 16:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.675 [2024-11-08 16:54:27.125745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:57.675 [2024-11-08 16:54:27.130335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:12:57.675 16:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.675 16:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:57.675 [2024-11-08 16:54:27.132737] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:12:58.614 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.614 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.614 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.614 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.614 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.874 "name": "raid_bdev1", 00:12:58.874 "uuid": "31f7f160-11bd-4c32-967c-acc7800d4948", 00:12:58.874 "strip_size_kb": 0, 00:12:58.874 "state": "online", 00:12:58.874 "raid_level": "raid1", 00:12:58.874 "superblock": false, 00:12:58.874 "num_base_bdevs": 2, 00:12:58.874 "num_base_bdevs_discovered": 2, 00:12:58.874 "num_base_bdevs_operational": 2, 00:12:58.874 "process": { 00:12:58.874 "type": "rebuild", 00:12:58.874 "target": "spare", 00:12:58.874 "progress": { 00:12:58.874 "blocks": 20480, 00:12:58.874 "percent": 31 00:12:58.874 } 00:12:58.874 }, 00:12:58.874 "base_bdevs_list": [ 00:12:58.874 { 00:12:58.874 "name": "spare", 00:12:58.874 "uuid": "ab7f7f12-7e1f-5970-b638-71afa1f05c81", 00:12:58.874 "is_configured": true, 00:12:58.874 "data_offset": 0, 00:12:58.874 
"data_size": 65536 00:12:58.874 }, 00:12:58.874 { 00:12:58.874 "name": "BaseBdev2", 00:12:58.874 "uuid": "499778da-e8e6-59b3-8fdb-55045875b076", 00:12:58.874 "is_configured": true, 00:12:58.874 "data_offset": 0, 00:12:58.874 "data_size": 65536 00:12:58.874 } 00:12:58.874 ] 00:12:58.874 }' 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.874 [2024-11-08 16:54:28.297227] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:58.874 [2024-11-08 16:54:28.339153] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:58.874 [2024-11-08 16:54:28.339328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.874 [2024-11-08 16:54:28.339357] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:58.874 [2024-11-08 16:54:28.339367] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.874 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.875 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.875 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.875 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.875 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.133 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.133 "name": "raid_bdev1", 00:12:59.133 "uuid": "31f7f160-11bd-4c32-967c-acc7800d4948", 00:12:59.133 "strip_size_kb": 0, 00:12:59.133 "state": "online", 00:12:59.133 "raid_level": "raid1", 00:12:59.133 "superblock": false, 00:12:59.133 "num_base_bdevs": 2, 00:12:59.133 "num_base_bdevs_discovered": 1, 00:12:59.133 "num_base_bdevs_operational": 1, 00:12:59.133 "base_bdevs_list": [ 00:12:59.133 { 00:12:59.133 "name": null, 00:12:59.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.133 
"is_configured": false, 00:12:59.133 "data_offset": 0, 00:12:59.133 "data_size": 65536 00:12:59.133 }, 00:12:59.133 { 00:12:59.133 "name": "BaseBdev2", 00:12:59.133 "uuid": "499778da-e8e6-59b3-8fdb-55045875b076", 00:12:59.133 "is_configured": true, 00:12:59.133 "data_offset": 0, 00:12:59.133 "data_size": 65536 00:12:59.133 } 00:12:59.133 ] 00:12:59.133 }' 00:12:59.133 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.133 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.392 "name": "raid_bdev1", 00:12:59.392 "uuid": "31f7f160-11bd-4c32-967c-acc7800d4948", 00:12:59.392 "strip_size_kb": 0, 00:12:59.392 "state": "online", 00:12:59.392 "raid_level": "raid1", 00:12:59.392 "superblock": false, 00:12:59.392 "num_base_bdevs": 2, 00:12:59.392 
"num_base_bdevs_discovered": 1, 00:12:59.392 "num_base_bdevs_operational": 1, 00:12:59.392 "base_bdevs_list": [ 00:12:59.392 { 00:12:59.392 "name": null, 00:12:59.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.392 "is_configured": false, 00:12:59.392 "data_offset": 0, 00:12:59.392 "data_size": 65536 00:12:59.392 }, 00:12:59.392 { 00:12:59.392 "name": "BaseBdev2", 00:12:59.392 "uuid": "499778da-e8e6-59b3-8fdb-55045875b076", 00:12:59.392 "is_configured": true, 00:12:59.392 "data_offset": 0, 00:12:59.392 "data_size": 65536 00:12:59.392 } 00:12:59.392 ] 00:12:59.392 }' 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.392 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.652 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.652 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.652 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.652 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.652 [2024-11-08 16:54:28.963301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.652 [2024-11-08 16:54:28.967922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:12:59.652 16:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.652 16:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:59.652 [2024-11-08 16:54:28.970113] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:00.594 16:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.594 16:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.594 16:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.594 16:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.594 16:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.594 16:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.594 16:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.594 16:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.594 16:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.594 16:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.594 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.594 "name": "raid_bdev1", 00:13:00.594 "uuid": "31f7f160-11bd-4c32-967c-acc7800d4948", 00:13:00.594 "strip_size_kb": 0, 00:13:00.594 "state": "online", 00:13:00.594 "raid_level": "raid1", 00:13:00.594 "superblock": false, 00:13:00.594 "num_base_bdevs": 2, 00:13:00.594 "num_base_bdevs_discovered": 2, 00:13:00.594 "num_base_bdevs_operational": 2, 00:13:00.594 "process": { 00:13:00.594 "type": "rebuild", 00:13:00.594 "target": "spare", 00:13:00.594 "progress": { 00:13:00.594 "blocks": 20480, 00:13:00.594 "percent": 31 00:13:00.594 } 00:13:00.594 }, 00:13:00.594 "base_bdevs_list": [ 00:13:00.594 { 00:13:00.595 "name": "spare", 00:13:00.595 "uuid": "ab7f7f12-7e1f-5970-b638-71afa1f05c81", 00:13:00.595 "is_configured": true, 00:13:00.595 "data_offset": 0, 00:13:00.595 "data_size": 65536 00:13:00.595 }, 00:13:00.595 { 00:13:00.595 "name": "BaseBdev2", 00:13:00.595 "uuid": 
"499778da-e8e6-59b3-8fdb-55045875b076", 00:13:00.595 "is_configured": true, 00:13:00.595 "data_offset": 0, 00:13:00.595 "data_size": 65536 00:13:00.595 } 00:13:00.595 ] 00:13:00.595 }' 00:13:00.595 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.595 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.595 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=295 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.871 "name": "raid_bdev1", 00:13:00.871 "uuid": "31f7f160-11bd-4c32-967c-acc7800d4948", 00:13:00.871 "strip_size_kb": 0, 00:13:00.871 "state": "online", 00:13:00.871 "raid_level": "raid1", 00:13:00.871 "superblock": false, 00:13:00.871 "num_base_bdevs": 2, 00:13:00.871 "num_base_bdevs_discovered": 2, 00:13:00.871 "num_base_bdevs_operational": 2, 00:13:00.871 "process": { 00:13:00.871 "type": "rebuild", 00:13:00.871 "target": "spare", 00:13:00.871 "progress": { 00:13:00.871 "blocks": 22528, 00:13:00.871 "percent": 34 00:13:00.871 } 00:13:00.871 }, 00:13:00.871 "base_bdevs_list": [ 00:13:00.871 { 00:13:00.871 "name": "spare", 00:13:00.871 "uuid": "ab7f7f12-7e1f-5970-b638-71afa1f05c81", 00:13:00.871 "is_configured": true, 00:13:00.871 "data_offset": 0, 00:13:00.871 "data_size": 65536 00:13:00.871 }, 00:13:00.871 { 00:13:00.871 "name": "BaseBdev2", 00:13:00.871 "uuid": "499778da-e8e6-59b3-8fdb-55045875b076", 00:13:00.871 "is_configured": true, 00:13:00.871 "data_offset": 0, 00:13:00.871 "data_size": 65536 00:13:00.871 } 00:13:00.871 ] 00:13:00.871 }' 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.871 16:54:30 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.851 "name": "raid_bdev1", 00:13:01.851 "uuid": "31f7f160-11bd-4c32-967c-acc7800d4948", 00:13:01.851 "strip_size_kb": 0, 00:13:01.851 "state": "online", 00:13:01.851 "raid_level": "raid1", 00:13:01.851 "superblock": false, 00:13:01.851 "num_base_bdevs": 2, 00:13:01.851 "num_base_bdevs_discovered": 2, 00:13:01.851 "num_base_bdevs_operational": 2, 00:13:01.851 "process": { 00:13:01.851 "type": "rebuild", 00:13:01.851 "target": "spare", 00:13:01.851 "progress": { 00:13:01.851 "blocks": 47104, 00:13:01.851 "percent": 71 00:13:01.851 } 00:13:01.851 }, 00:13:01.851 "base_bdevs_list": [ 00:13:01.851 { 00:13:01.851 "name": "spare", 00:13:01.851 "uuid": 
"ab7f7f12-7e1f-5970-b638-71afa1f05c81", 00:13:01.851 "is_configured": true, 00:13:01.851 "data_offset": 0, 00:13:01.851 "data_size": 65536 00:13:01.851 }, 00:13:01.851 { 00:13:01.851 "name": "BaseBdev2", 00:13:01.851 "uuid": "499778da-e8e6-59b3-8fdb-55045875b076", 00:13:01.851 "is_configured": true, 00:13:01.851 "data_offset": 0, 00:13:01.851 "data_size": 65536 00:13:01.851 } 00:13:01.851 ] 00:13:01.851 }' 00:13:01.851 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.111 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.111 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.111 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.111 16:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:02.680 [2024-11-08 16:54:32.184995] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:02.680 [2024-11-08 16:54:32.185199] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:02.680 [2024-11-08 16:54:32.185296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.939 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.939 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.939 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.939 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.939 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.939 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.939 16:54:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.939 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.940 16:54:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.940 16:54:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.199 16:54:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.199 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.199 "name": "raid_bdev1", 00:13:03.199 "uuid": "31f7f160-11bd-4c32-967c-acc7800d4948", 00:13:03.199 "strip_size_kb": 0, 00:13:03.199 "state": "online", 00:13:03.199 "raid_level": "raid1", 00:13:03.199 "superblock": false, 00:13:03.199 "num_base_bdevs": 2, 00:13:03.199 "num_base_bdevs_discovered": 2, 00:13:03.199 "num_base_bdevs_operational": 2, 00:13:03.199 "base_bdevs_list": [ 00:13:03.199 { 00:13:03.199 "name": "spare", 00:13:03.199 "uuid": "ab7f7f12-7e1f-5970-b638-71afa1f05c81", 00:13:03.199 "is_configured": true, 00:13:03.199 "data_offset": 0, 00:13:03.199 "data_size": 65536 00:13:03.199 }, 00:13:03.199 { 00:13:03.199 "name": "BaseBdev2", 00:13:03.199 "uuid": "499778da-e8e6-59b3-8fdb-55045875b076", 00:13:03.199 "is_configured": true, 00:13:03.199 "data_offset": 0, 00:13:03.199 "data_size": 65536 00:13:03.199 } 00:13:03.199 ] 00:13:03.199 }' 00:13:03.199 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.199 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.200 "name": "raid_bdev1", 00:13:03.200 "uuid": "31f7f160-11bd-4c32-967c-acc7800d4948", 00:13:03.200 "strip_size_kb": 0, 00:13:03.200 "state": "online", 00:13:03.200 "raid_level": "raid1", 00:13:03.200 "superblock": false, 00:13:03.200 "num_base_bdevs": 2, 00:13:03.200 "num_base_bdevs_discovered": 2, 00:13:03.200 "num_base_bdevs_operational": 2, 00:13:03.200 "base_bdevs_list": [ 00:13:03.200 { 00:13:03.200 "name": "spare", 00:13:03.200 "uuid": "ab7f7f12-7e1f-5970-b638-71afa1f05c81", 00:13:03.200 "is_configured": true, 00:13:03.200 "data_offset": 0, 00:13:03.200 "data_size": 65536 00:13:03.200 }, 00:13:03.200 { 00:13:03.200 "name": "BaseBdev2", 00:13:03.200 "uuid": "499778da-e8e6-59b3-8fdb-55045875b076", 00:13:03.200 "is_configured": true, 00:13:03.200 "data_offset": 0, 00:13:03.200 "data_size": 65536 
00:13:03.200 } 00:13:03.200 ] 00:13:03.200 }' 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.200 16:54:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.459 
16:54:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.460 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.460 "name": "raid_bdev1", 00:13:03.460 "uuid": "31f7f160-11bd-4c32-967c-acc7800d4948", 00:13:03.460 "strip_size_kb": 0, 00:13:03.460 "state": "online", 00:13:03.460 "raid_level": "raid1", 00:13:03.460 "superblock": false, 00:13:03.460 "num_base_bdevs": 2, 00:13:03.460 "num_base_bdevs_discovered": 2, 00:13:03.460 "num_base_bdevs_operational": 2, 00:13:03.460 "base_bdevs_list": [ 00:13:03.460 { 00:13:03.460 "name": "spare", 00:13:03.460 "uuid": "ab7f7f12-7e1f-5970-b638-71afa1f05c81", 00:13:03.460 "is_configured": true, 00:13:03.460 "data_offset": 0, 00:13:03.460 "data_size": 65536 00:13:03.460 }, 00:13:03.460 { 00:13:03.460 "name": "BaseBdev2", 00:13:03.460 "uuid": "499778da-e8e6-59b3-8fdb-55045875b076", 00:13:03.460 "is_configured": true, 00:13:03.460 "data_offset": 0, 00:13:03.460 "data_size": 65536 00:13:03.460 } 00:13:03.460 ] 00:13:03.460 }' 00:13:03.460 16:54:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.460 16:54:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.719 16:54:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 [2024-11-08 16:54:33.156242] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.720 [2024-11-08 16:54:33.156282] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.720 [2024-11-08 16:54:33.156400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.720 [2024-11-08 16:54:33.156476] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.720 [2024-11-08 16:54:33.156493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:03.720 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:03.979 /dev/nbd0 00:13:03.979 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:03.979 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:03.979 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:03.979 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.980 1+0 records in 00:13:03.980 1+0 records out 00:13:03.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252719 s, 16.2 MB/s 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:03.980 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:04.239 /dev/nbd1 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.239 1+0 records in 00:13:04.239 1+0 records out 00:13:04.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310873 s, 13.2 MB/s 00:13:04.239 16:54:33 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.239 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:04.240 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.240 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.240 16:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:04.240 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.240 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:04.240 16:54:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:04.499 16:54:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:04.499 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.499 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:04.499 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.499 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:04.499 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.499 16:54:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:04.757 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.757 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.757 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.757 
16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.757 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.757 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:04.757 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:04.757 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.757 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.758 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86036 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 86036 ']' 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86036 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 
-- # uname 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86036 00:13:05.017 killing process with pid 86036 00:13:05.017 Received shutdown signal, test time was about 60.000000 seconds 00:13:05.017 00:13:05.017 Latency(us) 00:13:05.017 [2024-11-08T16:54:34.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.017 [2024-11-08T16:54:34.545Z] =================================================================================================================== 00:13:05.017 [2024-11-08T16:54:34.545Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86036' 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86036 00:13:05.017 [2024-11-08 16:54:34.346472] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.017 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86036 00:13:05.017 [2024-11-08 16:54:34.379451] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:05.277 00:13:05.277 real 0m14.853s 00:13:05.277 user 0m17.233s 00:13:05.277 sys 0m3.202s 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.277 ************************************ 00:13:05.277 END TEST raid_rebuild_test 
00:13:05.277 ************************************ 00:13:05.277 16:54:34 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:05.277 16:54:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:05.277 16:54:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.277 16:54:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.277 ************************************ 00:13:05.277 START TEST raid_rebuild_test_sb 00:13:05.277 ************************************ 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86453 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86453 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86453 ']' 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.277 16:54:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.277 16:54:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.536 [2024-11-08 16:54:34.807571] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:05.536 [2024-11-08 16:54:34.807876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:05.536 Zero copy mechanism will not be used. 00:13:05.536 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86453 ] 00:13:05.536 [2024-11-08 16:54:34.977851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.536 [2024-11-08 16:54:35.030779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.794 [2024-11-08 16:54:35.074870] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.794 [2024-11-08 16:54:35.074996] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.363 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.363 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:06.363 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:06.363 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:06.363 16:54:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.363 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.363 BaseBdev1_malloc 00:13:06.363 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.363 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:06.363 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.363 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.363 [2024-11-08 16:54:35.694805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:06.363 [2024-11-08 16:54:35.694894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.363 [2024-11-08 16:54:35.694923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:06.363 [2024-11-08 16:54:35.694939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.363 [2024-11-08 16:54:35.697171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.363 [2024-11-08 16:54:35.697213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:06.363 BaseBdev1 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.364 BaseBdev2_malloc 00:13:06.364 
16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.364 [2024-11-08 16:54:35.733307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:06.364 [2024-11-08 16:54:35.733393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.364 [2024-11-08 16:54:35.733418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:06.364 [2024-11-08 16:54:35.733429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.364 [2024-11-08 16:54:35.735889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.364 [2024-11-08 16:54:35.736027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:06.364 BaseBdev2 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.364 spare_malloc 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.364 spare_delay 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.364 [2024-11-08 16:54:35.774689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:06.364 [2024-11-08 16:54:35.774886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.364 [2024-11-08 16:54:35.774920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:06.364 [2024-11-08 16:54:35.774932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.364 [2024-11-08 16:54:35.777391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.364 [2024-11-08 16:54:35.777431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:06.364 spare 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.364 [2024-11-08 16:54:35.786732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.364 [2024-11-08 
16:54:35.788748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.364 [2024-11-08 16:54:35.788996] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:06.364 [2024-11-08 16:54:35.789015] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.364 [2024-11-08 16:54:35.789304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:06.364 [2024-11-08 16:54:35.789447] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:06.364 [2024-11-08 16:54:35.789466] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:06.364 [2024-11-08 16:54:35.789602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.364 "name": "raid_bdev1", 00:13:06.364 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:06.364 "strip_size_kb": 0, 00:13:06.364 "state": "online", 00:13:06.364 "raid_level": "raid1", 00:13:06.364 "superblock": true, 00:13:06.364 "num_base_bdevs": 2, 00:13:06.364 "num_base_bdevs_discovered": 2, 00:13:06.364 "num_base_bdevs_operational": 2, 00:13:06.364 "base_bdevs_list": [ 00:13:06.364 { 00:13:06.364 "name": "BaseBdev1", 00:13:06.364 "uuid": "b65ec568-31f6-51d1-b7bb-1e34c7ba64a4", 00:13:06.364 "is_configured": true, 00:13:06.364 "data_offset": 2048, 00:13:06.364 "data_size": 63488 00:13:06.364 }, 00:13:06.364 { 00:13:06.364 "name": "BaseBdev2", 00:13:06.364 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:06.364 "is_configured": true, 00:13:06.364 "data_offset": 2048, 00:13:06.364 "data_size": 63488 00:13:06.364 } 00:13:06.364 ] 00:13:06.364 }' 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.364 16:54:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.933 [2024-11-08 16:54:36.234223] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:06.933 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:13:06.934 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:06.934 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.934 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:06.934 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.934 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.934 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:07.194 [2024-11-08 16:54:36.533470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:07.194 /dev/nbd0 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.194 1+0 records in 00:13:07.194 1+0 records out 00:13:07.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308773 s, 13.3 MB/s 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:07.194 16:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:11.386 63488+0 records in 00:13:11.386 63488+0 records out 00:13:11.386 32505856 bytes (33 MB, 31 MiB) copied, 4.21992 s, 7.7 MB/s 00:13:11.386 16:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:11.386 16:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.386 16:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:11.386 16:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.386 16:54:40 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@51 -- # local i 00:13:11.386 16:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.386 16:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:11.653 [2024-11-08 16:54:41.088950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.653 [2024-11-08 16:54:41.105696] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.653 "name": "raid_bdev1", 00:13:11.653 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:11.653 "strip_size_kb": 0, 00:13:11.653 "state": "online", 00:13:11.653 "raid_level": "raid1", 00:13:11.653 "superblock": true, 00:13:11.653 "num_base_bdevs": 2, 00:13:11.653 "num_base_bdevs_discovered": 1, 00:13:11.653 "num_base_bdevs_operational": 1, 00:13:11.653 "base_bdevs_list": [ 00:13:11.653 { 00:13:11.653 "name": null, 00:13:11.653 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:11.653 "is_configured": false, 00:13:11.653 "data_offset": 0, 00:13:11.653 "data_size": 63488 00:13:11.653 }, 00:13:11.653 { 00:13:11.653 "name": "BaseBdev2", 00:13:11.653 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:11.653 "is_configured": true, 00:13:11.653 "data_offset": 2048, 00:13:11.653 "data_size": 63488 00:13:11.653 } 00:13:11.653 ] 00:13:11.653 }' 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.653 16:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.243 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:12.243 16:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.243 16:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.243 [2024-11-08 16:54:41.556847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.243 [2024-11-08 16:54:41.561275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:13:12.243 16:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.243 16:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:12.243 [2024-11-08 16:54:41.563284] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:13.184 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.184 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.184 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.184 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.185 
16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.185 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.185 16:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.185 16:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.185 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.185 16:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.185 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.185 "name": "raid_bdev1", 00:13:13.185 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:13.185 "strip_size_kb": 0, 00:13:13.185 "state": "online", 00:13:13.185 "raid_level": "raid1", 00:13:13.185 "superblock": true, 00:13:13.185 "num_base_bdevs": 2, 00:13:13.185 "num_base_bdevs_discovered": 2, 00:13:13.185 "num_base_bdevs_operational": 2, 00:13:13.185 "process": { 00:13:13.185 "type": "rebuild", 00:13:13.185 "target": "spare", 00:13:13.185 "progress": { 00:13:13.185 "blocks": 20480, 00:13:13.185 "percent": 32 00:13:13.185 } 00:13:13.185 }, 00:13:13.185 "base_bdevs_list": [ 00:13:13.185 { 00:13:13.185 "name": "spare", 00:13:13.185 "uuid": "33b5ec4e-9a5b-5d0c-a2ba-d85906089ad7", 00:13:13.185 "is_configured": true, 00:13:13.185 "data_offset": 2048, 00:13:13.185 "data_size": 63488 00:13:13.185 }, 00:13:13.185 { 00:13:13.185 "name": "BaseBdev2", 00:13:13.185 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:13.185 "is_configured": true, 00:13:13.185 "data_offset": 2048, 00:13:13.185 "data_size": 63488 00:13:13.185 } 00:13:13.185 ] 00:13:13.185 }' 00:13:13.185 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.185 16:54:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.185 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.444 [2024-11-08 16:54:42.736414] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.444 [2024-11-08 16:54:42.769641] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:13.444 [2024-11-08 16:54:42.769737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.444 [2024-11-08 16:54:42.769761] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.444 [2024-11-08 16:54:42.769771] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.444 "name": "raid_bdev1", 00:13:13.444 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:13.444 "strip_size_kb": 0, 00:13:13.444 "state": "online", 00:13:13.444 "raid_level": "raid1", 00:13:13.444 "superblock": true, 00:13:13.444 "num_base_bdevs": 2, 00:13:13.444 "num_base_bdevs_discovered": 1, 00:13:13.444 "num_base_bdevs_operational": 1, 00:13:13.444 "base_bdevs_list": [ 00:13:13.444 { 00:13:13.444 "name": null, 00:13:13.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.444 "is_configured": false, 00:13:13.444 "data_offset": 0, 00:13:13.444 "data_size": 63488 00:13:13.444 }, 00:13:13.444 { 00:13:13.444 "name": "BaseBdev2", 00:13:13.444 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:13.444 "is_configured": true, 00:13:13.444 "data_offset": 2048, 00:13:13.444 "data_size": 63488 00:13:13.444 } 00:13:13.444 ] 00:13:13.444 }' 00:13:13.444 16:54:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.444 16:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.703 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:13.703 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.703 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:13.703 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.703 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.703 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.703 16:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.703 16:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.703 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.703 16:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.962 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.962 "name": "raid_bdev1", 00:13:13.962 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:13.962 "strip_size_kb": 0, 00:13:13.962 "state": "online", 00:13:13.962 "raid_level": "raid1", 00:13:13.962 "superblock": true, 00:13:13.962 "num_base_bdevs": 2, 00:13:13.962 "num_base_bdevs_discovered": 1, 00:13:13.962 "num_base_bdevs_operational": 1, 00:13:13.962 "base_bdevs_list": [ 00:13:13.962 { 00:13:13.962 "name": null, 00:13:13.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.962 "is_configured": false, 00:13:13.962 "data_offset": 0, 00:13:13.962 "data_size": 63488 00:13:13.962 }, 00:13:13.962 
{ 00:13:13.962 "name": "BaseBdev2", 00:13:13.962 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:13.962 "is_configured": true, 00:13:13.962 "data_offset": 2048, 00:13:13.962 "data_size": 63488 00:13:13.962 } 00:13:13.962 ] 00:13:13.962 }' 00:13:13.962 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.962 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:13.962 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.962 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:13.962 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:13.962 16:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.962 16:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.962 [2024-11-08 16:54:43.365728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.962 [2024-11-08 16:54:43.370254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:13:13.962 16:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.962 16:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:13.962 [2024-11-08 16:54:43.372499] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.910 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.910 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.910 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.910 16:54:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.910 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.910 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.910 16:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.910 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.910 16:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.910 16:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.911 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.911 "name": "raid_bdev1", 00:13:14.911 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:14.911 "strip_size_kb": 0, 00:13:14.911 "state": "online", 00:13:14.911 "raid_level": "raid1", 00:13:14.911 "superblock": true, 00:13:14.911 "num_base_bdevs": 2, 00:13:14.911 "num_base_bdevs_discovered": 2, 00:13:14.911 "num_base_bdevs_operational": 2, 00:13:14.911 "process": { 00:13:14.911 "type": "rebuild", 00:13:14.911 "target": "spare", 00:13:14.911 "progress": { 00:13:14.911 "blocks": 20480, 00:13:14.911 "percent": 32 00:13:14.911 } 00:13:14.911 }, 00:13:14.911 "base_bdevs_list": [ 00:13:14.911 { 00:13:14.911 "name": "spare", 00:13:14.911 "uuid": "33b5ec4e-9a5b-5d0c-a2ba-d85906089ad7", 00:13:14.911 "is_configured": true, 00:13:14.911 "data_offset": 2048, 00:13:14.911 "data_size": 63488 00:13:14.911 }, 00:13:14.911 { 00:13:14.911 "name": "BaseBdev2", 00:13:14.911 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:14.911 "is_configured": true, 00:13:14.911 "data_offset": 2048, 00:13:14.911 "data_size": 63488 00:13:14.911 } 00:13:14.911 ] 00:13:14.911 }' 00:13:14.911 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:15.170 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=309 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.170 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.171 16:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:15.171 16:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.171 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.171 16:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.171 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.171 "name": "raid_bdev1", 00:13:15.171 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:15.171 "strip_size_kb": 0, 00:13:15.171 "state": "online", 00:13:15.171 "raid_level": "raid1", 00:13:15.171 "superblock": true, 00:13:15.171 "num_base_bdevs": 2, 00:13:15.171 "num_base_bdevs_discovered": 2, 00:13:15.171 "num_base_bdevs_operational": 2, 00:13:15.171 "process": { 00:13:15.171 "type": "rebuild", 00:13:15.171 "target": "spare", 00:13:15.171 "progress": { 00:13:15.171 "blocks": 22528, 00:13:15.171 "percent": 35 00:13:15.171 } 00:13:15.171 }, 00:13:15.171 "base_bdevs_list": [ 00:13:15.171 { 00:13:15.171 "name": "spare", 00:13:15.171 "uuid": "33b5ec4e-9a5b-5d0c-a2ba-d85906089ad7", 00:13:15.171 "is_configured": true, 00:13:15.171 "data_offset": 2048, 00:13:15.171 "data_size": 63488 00:13:15.171 }, 00:13:15.171 { 00:13:15.171 "name": "BaseBdev2", 00:13:15.171 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:15.171 "is_configured": true, 00:13:15.171 "data_offset": 2048, 00:13:15.171 "data_size": 63488 00:13:15.171 } 00:13:15.171 ] 00:13:15.171 }' 00:13:15.171 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.171 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.171 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.171 16:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.171 16:54:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:16.551 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.551 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.551 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.551 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.551 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.551 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.551 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.551 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.551 16:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.552 16:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.552 16:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.552 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.552 "name": "raid_bdev1", 00:13:16.552 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:16.552 "strip_size_kb": 0, 00:13:16.552 "state": "online", 00:13:16.552 "raid_level": "raid1", 00:13:16.552 "superblock": true, 00:13:16.552 "num_base_bdevs": 2, 00:13:16.552 "num_base_bdevs_discovered": 2, 00:13:16.552 "num_base_bdevs_operational": 2, 00:13:16.552 "process": { 00:13:16.552 "type": "rebuild", 00:13:16.552 "target": "spare", 00:13:16.552 "progress": { 00:13:16.552 "blocks": 47104, 00:13:16.552 "percent": 74 00:13:16.552 } 00:13:16.552 }, 00:13:16.552 "base_bdevs_list": [ 00:13:16.552 { 
00:13:16.552 "name": "spare", 00:13:16.552 "uuid": "33b5ec4e-9a5b-5d0c-a2ba-d85906089ad7", 00:13:16.552 "is_configured": true, 00:13:16.552 "data_offset": 2048, 00:13:16.552 "data_size": 63488 00:13:16.552 }, 00:13:16.552 { 00:13:16.552 "name": "BaseBdev2", 00:13:16.552 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:16.552 "is_configured": true, 00:13:16.552 "data_offset": 2048, 00:13:16.552 "data_size": 63488 00:13:16.552 } 00:13:16.552 ] 00:13:16.552 }' 00:13:16.552 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.552 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.552 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.552 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.552 16:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:17.120 [2024-11-08 16:54:46.487113] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:17.120 [2024-11-08 16:54:46.487375] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:17.120 [2024-11-08 16:54:46.487565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.380 16:54:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.380 "name": "raid_bdev1", 00:13:17.380 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:17.380 "strip_size_kb": 0, 00:13:17.380 "state": "online", 00:13:17.380 "raid_level": "raid1", 00:13:17.380 "superblock": true, 00:13:17.380 "num_base_bdevs": 2, 00:13:17.380 "num_base_bdevs_discovered": 2, 00:13:17.380 "num_base_bdevs_operational": 2, 00:13:17.380 "base_bdevs_list": [ 00:13:17.380 { 00:13:17.380 "name": "spare", 00:13:17.380 "uuid": "33b5ec4e-9a5b-5d0c-a2ba-d85906089ad7", 00:13:17.380 "is_configured": true, 00:13:17.380 "data_offset": 2048, 00:13:17.380 "data_size": 63488 00:13:17.380 }, 00:13:17.380 { 00:13:17.380 "name": "BaseBdev2", 00:13:17.380 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:17.380 "is_configured": true, 00:13:17.380 "data_offset": 2048, 00:13:17.380 "data_size": 63488 00:13:17.380 } 00:13:17.380 ] 00:13:17.380 }' 00:13:17.380 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.640 16:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.640 "name": "raid_bdev1", 00:13:17.640 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:17.640 "strip_size_kb": 0, 00:13:17.640 "state": "online", 00:13:17.640 "raid_level": "raid1", 00:13:17.640 "superblock": true, 00:13:17.640 "num_base_bdevs": 2, 00:13:17.640 "num_base_bdevs_discovered": 2, 00:13:17.640 "num_base_bdevs_operational": 2, 00:13:17.640 "base_bdevs_list": [ 00:13:17.640 { 00:13:17.640 "name": "spare", 00:13:17.640 "uuid": "33b5ec4e-9a5b-5d0c-a2ba-d85906089ad7", 00:13:17.640 "is_configured": true, 00:13:17.640 "data_offset": 2048, 00:13:17.640 "data_size": 63488 00:13:17.640 }, 00:13:17.640 { 00:13:17.640 "name": 
"BaseBdev2", 00:13:17.640 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:17.640 "is_configured": true, 00:13:17.640 "data_offset": 2048, 00:13:17.640 "data_size": 63488 00:13:17.640 } 00:13:17.640 ] 00:13:17.640 }' 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.640 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.899 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.899 "name": "raid_bdev1", 00:13:17.899 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:17.899 "strip_size_kb": 0, 00:13:17.899 "state": "online", 00:13:17.899 "raid_level": "raid1", 00:13:17.899 "superblock": true, 00:13:17.899 "num_base_bdevs": 2, 00:13:17.899 "num_base_bdevs_discovered": 2, 00:13:17.899 "num_base_bdevs_operational": 2, 00:13:17.899 "base_bdevs_list": [ 00:13:17.899 { 00:13:17.899 "name": "spare", 00:13:17.899 "uuid": "33b5ec4e-9a5b-5d0c-a2ba-d85906089ad7", 00:13:17.899 "is_configured": true, 00:13:17.899 "data_offset": 2048, 00:13:17.899 "data_size": 63488 00:13:17.899 }, 00:13:17.899 { 00:13:17.899 "name": "BaseBdev2", 00:13:17.899 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:17.899 "is_configured": true, 00:13:17.899 "data_offset": 2048, 00:13:17.899 "data_size": 63488 00:13:17.899 } 00:13:17.899 ] 00:13:17.899 }' 00:13:17.899 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.899 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.158 [2024-11-08 16:54:47.583279] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:18.158 [2024-11-08 16:54:47.583401] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.158 [2024-11-08 16:54:47.583522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.158 [2024-11-08 16:54:47.583601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.158 [2024-11-08 16:54:47.583619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:18.158 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:18.417 /dev/nbd0 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.417 1+0 records in 00:13:18.417 1+0 records out 00:13:18.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000257341 s, 15.9 MB/s 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:18.417 16:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:18.677 /dev/nbd1 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.677 16:54:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.677 1+0 records in 00:13:18.677 1+0 records out 00:13:18.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514364 s, 8.0 MB/s 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:18.677 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.937 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.937 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:18.937 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.937 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:18.937 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:18.937 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:18.937 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.937 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:18.937 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.937 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:18.937 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.937 
16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:19.197 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:19.197 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:19.197 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:19.197 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.197 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.197 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:19.197 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:19.197 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.197 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.197 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.456 [2024-11-08 16:54:48.806580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:19.456 [2024-11-08 16:54:48.806681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.456 [2024-11-08 16:54:48.806708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:19.456 [2024-11-08 16:54:48.806723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.456 [2024-11-08 16:54:48.809102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.456 [2024-11-08 16:54:48.809147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:19.456 [2024-11-08 16:54:48.809242] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:19.456 [2024-11-08 16:54:48.809298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.456 [2024-11-08 16:54:48.809419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:19.456 spare 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.456 [2024-11-08 16:54:48.909339] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:19.456 [2024-11-08 16:54:48.909393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:19.456 [2024-11-08 16:54:48.909817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:13:19.456 [2024-11-08 16:54:48.910029] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:19.456 [2024-11-08 16:54:48.910053] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:19.456 [2024-11-08 16:54:48.910233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.456 "name": "raid_bdev1", 00:13:19.456 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:19.456 "strip_size_kb": 0, 00:13:19.456 "state": "online", 00:13:19.456 "raid_level": "raid1", 00:13:19.456 "superblock": true, 00:13:19.456 "num_base_bdevs": 2, 00:13:19.456 "num_base_bdevs_discovered": 2, 00:13:19.456 "num_base_bdevs_operational": 2, 00:13:19.456 "base_bdevs_list": [ 00:13:19.456 { 00:13:19.456 "name": "spare", 00:13:19.456 "uuid": "33b5ec4e-9a5b-5d0c-a2ba-d85906089ad7", 00:13:19.456 "is_configured": true, 00:13:19.456 "data_offset": 2048, 00:13:19.456 "data_size": 63488 00:13:19.456 }, 00:13:19.456 { 00:13:19.456 "name": "BaseBdev2", 00:13:19.456 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:19.456 "is_configured": true, 00:13:19.456 "data_offset": 2048, 00:13:19.456 "data_size": 63488 00:13:19.456 } 00:13:19.456 ] 00:13:19.456 }' 00:13:19.456 16:54:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.456 16:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.026 "name": "raid_bdev1", 00:13:20.026 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:20.026 "strip_size_kb": 0, 00:13:20.026 "state": "online", 00:13:20.026 "raid_level": "raid1", 00:13:20.026 "superblock": true, 00:13:20.026 "num_base_bdevs": 2, 00:13:20.026 "num_base_bdevs_discovered": 2, 00:13:20.026 "num_base_bdevs_operational": 2, 00:13:20.026 "base_bdevs_list": [ 00:13:20.026 { 00:13:20.026 "name": "spare", 00:13:20.026 "uuid": "33b5ec4e-9a5b-5d0c-a2ba-d85906089ad7", 00:13:20.026 "is_configured": true, 00:13:20.026 "data_offset": 2048, 00:13:20.026 "data_size": 63488 00:13:20.026 }, 
00:13:20.026 { 00:13:20.026 "name": "BaseBdev2", 00:13:20.026 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:20.026 "is_configured": true, 00:13:20.026 "data_offset": 2048, 00:13:20.026 "data_size": 63488 00:13:20.026 } 00:13:20.026 ] 00:13:20.026 }' 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.026 [2024-11-08 16:54:49.509459] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:20.026 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.027 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.027 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.027 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.027 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.027 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.027 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.027 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.027 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.287 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.287 "name": "raid_bdev1", 00:13:20.287 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:20.287 "strip_size_kb": 0, 00:13:20.287 "state": "online", 00:13:20.287 "raid_level": "raid1", 00:13:20.287 "superblock": true, 00:13:20.287 "num_base_bdevs": 2, 00:13:20.287 "num_base_bdevs_discovered": 1, 00:13:20.287 "num_base_bdevs_operational": 
1, 00:13:20.287 "base_bdevs_list": [ 00:13:20.287 { 00:13:20.287 "name": null, 00:13:20.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.287 "is_configured": false, 00:13:20.287 "data_offset": 0, 00:13:20.287 "data_size": 63488 00:13:20.287 }, 00:13:20.287 { 00:13:20.287 "name": "BaseBdev2", 00:13:20.287 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:20.287 "is_configured": true, 00:13:20.287 "data_offset": 2048, 00:13:20.287 "data_size": 63488 00:13:20.287 } 00:13:20.287 ] 00:13:20.287 }' 00:13:20.287 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.287 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.547 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.547 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.547 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.547 [2024-11-08 16:54:49.944757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.547 [2024-11-08 16:54:49.944965] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:20.547 [2024-11-08 16:54:49.945000] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:20.547 [2024-11-08 16:54:49.945052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.547 [2024-11-08 16:54:49.949246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:13:20.547 16:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.547 16:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:20.547 [2024-11-08 16:54:49.951385] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.486 16:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.486 16:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.486 16:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.486 16:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.486 16:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.486 16:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.486 16:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.486 16:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.486 16:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.486 16:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.486 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.486 "name": "raid_bdev1", 00:13:21.486 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:21.486 "strip_size_kb": 0, 00:13:21.486 "state": "online", 00:13:21.486 "raid_level": "raid1", 
00:13:21.486 "superblock": true, 00:13:21.486 "num_base_bdevs": 2, 00:13:21.486 "num_base_bdevs_discovered": 2, 00:13:21.486 "num_base_bdevs_operational": 2, 00:13:21.486 "process": { 00:13:21.486 "type": "rebuild", 00:13:21.486 "target": "spare", 00:13:21.486 "progress": { 00:13:21.486 "blocks": 20480, 00:13:21.486 "percent": 32 00:13:21.486 } 00:13:21.486 }, 00:13:21.486 "base_bdevs_list": [ 00:13:21.486 { 00:13:21.486 "name": "spare", 00:13:21.486 "uuid": "33b5ec4e-9a5b-5d0c-a2ba-d85906089ad7", 00:13:21.486 "is_configured": true, 00:13:21.486 "data_offset": 2048, 00:13:21.486 "data_size": 63488 00:13:21.486 }, 00:13:21.486 { 00:13:21.486 "name": "BaseBdev2", 00:13:21.486 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:21.486 "is_configured": true, 00:13:21.486 "data_offset": 2048, 00:13:21.486 "data_size": 63488 00:13:21.486 } 00:13:21.486 ] 00:13:21.486 }' 00:13:21.486 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.746 [2024-11-08 16:54:51.107929] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.746 [2024-11-08 16:54:51.156921] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:21.746 [2024-11-08 16:54:51.157016] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:21.746 [2024-11-08 16:54:51.157053] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.746 [2024-11-08 16:54:51.157062] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.746 "name": "raid_bdev1", 00:13:21.746 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:21.746 "strip_size_kb": 0, 00:13:21.746 "state": "online", 00:13:21.746 "raid_level": "raid1", 00:13:21.746 "superblock": true, 00:13:21.746 "num_base_bdevs": 2, 00:13:21.746 "num_base_bdevs_discovered": 1, 00:13:21.746 "num_base_bdevs_operational": 1, 00:13:21.746 "base_bdevs_list": [ 00:13:21.746 { 00:13:21.746 "name": null, 00:13:21.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.746 "is_configured": false, 00:13:21.746 "data_offset": 0, 00:13:21.746 "data_size": 63488 00:13:21.746 }, 00:13:21.746 { 00:13:21.746 "name": "BaseBdev2", 00:13:21.746 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:21.746 "is_configured": true, 00:13:21.746 "data_offset": 2048, 00:13:21.746 "data_size": 63488 00:13:21.746 } 00:13:21.746 ] 00:13:21.746 }' 00:13:21.746 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.747 16:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.334 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:22.334 16:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.334 16:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.334 [2024-11-08 16:54:51.636819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:22.334 [2024-11-08 16:54:51.636906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.334 [2024-11-08 16:54:51.636936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:22.334 [2024-11-08 16:54:51.636947] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.334 [2024-11-08 16:54:51.637472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.334 [2024-11-08 16:54:51.637503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:22.334 [2024-11-08 16:54:51.637610] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:22.334 [2024-11-08 16:54:51.637645] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:22.334 [2024-11-08 16:54:51.637679] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:22.334 [2024-11-08 16:54:51.637711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.334 [2024-11-08 16:54:51.642071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:22.334 spare 00:13:22.334 16:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.334 16:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:22.334 [2024-11-08 16:54:51.644336] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.272 "name": "raid_bdev1", 00:13:23.272 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:23.272 "strip_size_kb": 0, 00:13:23.272 "state": "online", 00:13:23.272 "raid_level": "raid1", 00:13:23.272 "superblock": true, 00:13:23.272 "num_base_bdevs": 2, 00:13:23.272 "num_base_bdevs_discovered": 2, 00:13:23.272 "num_base_bdevs_operational": 2, 00:13:23.272 "process": { 00:13:23.272 "type": "rebuild", 00:13:23.272 "target": "spare", 00:13:23.272 "progress": { 00:13:23.272 "blocks": 20480, 00:13:23.272 "percent": 32 00:13:23.272 } 00:13:23.272 }, 00:13:23.272 "base_bdevs_list": [ 00:13:23.272 { 00:13:23.272 "name": "spare", 00:13:23.272 "uuid": "33b5ec4e-9a5b-5d0c-a2ba-d85906089ad7", 00:13:23.272 "is_configured": true, 00:13:23.272 "data_offset": 2048, 00:13:23.272 "data_size": 63488 00:13:23.272 }, 00:13:23.272 { 00:13:23.272 "name": "BaseBdev2", 00:13:23.272 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:23.272 "is_configured": true, 00:13:23.272 "data_offset": 2048, 00:13:23.272 "data_size": 63488 00:13:23.272 } 00:13:23.272 ] 00:13:23.272 }' 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.272 
16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.272 16:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.272 [2024-11-08 16:54:52.784868] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.531 [2024-11-08 16:54:52.850250] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:23.531 [2024-11-08 16:54:52.850351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.531 [2024-11-08 16:54:52.850371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.531 [2024-11-08 16:54:52.850384] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.531 "name": "raid_bdev1", 00:13:23.531 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:23.531 "strip_size_kb": 0, 00:13:23.531 "state": "online", 00:13:23.531 "raid_level": "raid1", 00:13:23.531 "superblock": true, 00:13:23.531 "num_base_bdevs": 2, 00:13:23.531 "num_base_bdevs_discovered": 1, 00:13:23.531 "num_base_bdevs_operational": 1, 00:13:23.531 "base_bdevs_list": [ 00:13:23.531 { 00:13:23.531 "name": null, 00:13:23.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.531 "is_configured": false, 00:13:23.531 "data_offset": 0, 00:13:23.531 "data_size": 63488 00:13:23.531 }, 00:13:23.531 { 00:13:23.531 "name": "BaseBdev2", 00:13:23.531 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:23.531 "is_configured": true, 00:13:23.531 "data_offset": 2048, 00:13:23.531 "data_size": 63488 00:13:23.531 } 00:13:23.531 ] 00:13:23.531 }' 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.531 16:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.790 16:54:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.790 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.790 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.790 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.790 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.790 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.790 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.790 16:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.790 16:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.049 "name": "raid_bdev1", 00:13:24.049 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:24.049 "strip_size_kb": 0, 00:13:24.049 "state": "online", 00:13:24.049 "raid_level": "raid1", 00:13:24.049 "superblock": true, 00:13:24.049 "num_base_bdevs": 2, 00:13:24.049 "num_base_bdevs_discovered": 1, 00:13:24.049 "num_base_bdevs_operational": 1, 00:13:24.049 "base_bdevs_list": [ 00:13:24.049 { 00:13:24.049 "name": null, 00:13:24.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.049 "is_configured": false, 00:13:24.049 "data_offset": 0, 00:13:24.049 "data_size": 63488 00:13:24.049 }, 00:13:24.049 { 00:13:24.049 "name": "BaseBdev2", 00:13:24.049 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:24.049 "is_configured": true, 00:13:24.049 "data_offset": 2048, 00:13:24.049 "data_size": 
63488 00:13:24.049 } 00:13:24.049 ] 00:13:24.049 }' 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.049 [2024-11-08 16:54:53.430187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:24.049 [2024-11-08 16:54:53.430291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.049 [2024-11-08 16:54:53.430318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:24.049 [2024-11-08 16:54:53.430331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.049 [2024-11-08 16:54:53.430823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.049 [2024-11-08 16:54:53.430860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:24.049 [2024-11-08 16:54:53.430957] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:24.049 [2024-11-08 16:54:53.430980] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:24.049 [2024-11-08 16:54:53.431000] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:24.049 [2024-11-08 16:54:53.431029] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:24.049 BaseBdev1 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.049 16:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.985 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.985 "name": "raid_bdev1", 00:13:24.985 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:24.985 "strip_size_kb": 0, 00:13:24.985 "state": "online", 00:13:24.985 "raid_level": "raid1", 00:13:24.985 "superblock": true, 00:13:24.985 "num_base_bdevs": 2, 00:13:24.985 "num_base_bdevs_discovered": 1, 00:13:24.985 "num_base_bdevs_operational": 1, 00:13:24.985 "base_bdevs_list": [ 00:13:24.985 { 00:13:24.985 "name": null, 00:13:24.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.986 "is_configured": false, 00:13:24.986 "data_offset": 0, 00:13:24.986 "data_size": 63488 00:13:24.986 }, 00:13:24.986 { 00:13:24.986 "name": "BaseBdev2", 00:13:24.986 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:24.986 "is_configured": true, 00:13:24.986 "data_offset": 2048, 00:13:24.986 "data_size": 63488 00:13:24.986 } 00:13:24.986 ] 00:13:24.986 }' 00:13:24.986 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.986 16:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.553 "name": "raid_bdev1", 00:13:25.553 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:25.553 "strip_size_kb": 0, 00:13:25.553 "state": "online", 00:13:25.553 "raid_level": "raid1", 00:13:25.553 "superblock": true, 00:13:25.553 "num_base_bdevs": 2, 00:13:25.553 "num_base_bdevs_discovered": 1, 00:13:25.553 "num_base_bdevs_operational": 1, 00:13:25.553 "base_bdevs_list": [ 00:13:25.553 { 00:13:25.553 "name": null, 00:13:25.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.553 "is_configured": false, 00:13:25.553 "data_offset": 0, 00:13:25.553 "data_size": 63488 00:13:25.553 }, 00:13:25.553 { 00:13:25.553 "name": "BaseBdev2", 00:13:25.553 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:25.553 "is_configured": true, 00:13:25.553 "data_offset": 2048, 00:13:25.553 "data_size": 63488 00:13:25.553 } 00:13:25.553 ] 00:13:25.553 }' 00:13:25.553 16:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.553 16:54:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.553 [2024-11-08 16:54:55.071699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.553 [2024-11-08 16:54:55.071902] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:25.553 [2024-11-08 16:54:55.071923] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:25.553 request: 00:13:25.553 { 00:13:25.553 "base_bdev": "BaseBdev1", 00:13:25.553 "raid_bdev": "raid_bdev1", 00:13:25.553 "method": 
"bdev_raid_add_base_bdev", 00:13:25.553 "req_id": 1 00:13:25.553 } 00:13:25.553 Got JSON-RPC error response 00:13:25.553 response: 00:13:25.553 { 00:13:25.553 "code": -22, 00:13:25.553 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:25.553 } 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:25.553 16:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.935 16:54:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.935 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.936 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.936 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.936 "name": "raid_bdev1", 00:13:26.936 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:26.936 "strip_size_kb": 0, 00:13:26.936 "state": "online", 00:13:26.936 "raid_level": "raid1", 00:13:26.936 "superblock": true, 00:13:26.936 "num_base_bdevs": 2, 00:13:26.936 "num_base_bdevs_discovered": 1, 00:13:26.936 "num_base_bdevs_operational": 1, 00:13:26.936 "base_bdevs_list": [ 00:13:26.936 { 00:13:26.936 "name": null, 00:13:26.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.936 "is_configured": false, 00:13:26.936 "data_offset": 0, 00:13:26.936 "data_size": 63488 00:13:26.936 }, 00:13:26.936 { 00:13:26.936 "name": "BaseBdev2", 00:13:26.936 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:26.936 "is_configured": true, 00:13:26.936 "data_offset": 2048, 00:13:26.936 "data_size": 63488 00:13:26.936 } 00:13:26.936 ] 00:13:26.936 }' 00:13:26.936 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.936 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.195 "name": "raid_bdev1", 00:13:27.195 "uuid": "a84ae49d-97fa-4955-8afd-27795f2c8ea8", 00:13:27.195 "strip_size_kb": 0, 00:13:27.195 "state": "online", 00:13:27.195 "raid_level": "raid1", 00:13:27.195 "superblock": true, 00:13:27.195 "num_base_bdevs": 2, 00:13:27.195 "num_base_bdevs_discovered": 1, 00:13:27.195 "num_base_bdevs_operational": 1, 00:13:27.195 "base_bdevs_list": [ 00:13:27.195 { 00:13:27.195 "name": null, 00:13:27.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.195 "is_configured": false, 00:13:27.195 "data_offset": 0, 00:13:27.195 "data_size": 63488 00:13:27.195 }, 00:13:27.195 { 00:13:27.195 "name": "BaseBdev2", 00:13:27.195 "uuid": "d4839430-13d4-5622-9c0d-3e8fe5f2228a", 00:13:27.195 "is_configured": true, 00:13:27.195 "data_offset": 2048, 00:13:27.195 "data_size": 63488 00:13:27.195 } 00:13:27.195 ] 00:13:27.195 }' 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86453 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86453 ']' 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86453 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.195 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86453 00:13:27.454 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:27.454 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:27.454 killing process with pid 86453 00:13:27.454 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86453' 00:13:27.454 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86453 00:13:27.454 Received shutdown signal, test time was about 60.000000 seconds 00:13:27.454 00:13:27.454 Latency(us) 00:13:27.454 [2024-11-08T16:54:56.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.454 [2024-11-08T16:54:56.982Z] =================================================================================================================== 00:13:27.454 [2024-11-08T16:54:56.982Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:27.454 16:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86453 00:13:27.454 [2024-11-08 
16:54:56.726722] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.454 [2024-11-08 16:54:56.726909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.454 [2024-11-08 16:54:56.726991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.454 [2024-11-08 16:54:56.727004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:27.454 [2024-11-08 16:54:56.761222] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:27.714 00:13:27.714 real 0m22.316s 00:13:27.714 user 0m27.701s 00:13:27.714 sys 0m3.613s 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.714 ************************************ 00:13:27.714 END TEST raid_rebuild_test_sb 00:13:27.714 ************************************ 00:13:27.714 16:54:57 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:27.714 16:54:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:27.714 16:54:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.714 16:54:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:27.714 ************************************ 00:13:27.714 START TEST raid_rebuild_test_io 00:13:27.714 ************************************ 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:27.714 
16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87169 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87169 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87169 ']' 00:13:27.714 16:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.715 16:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.715 16:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.715 16:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.715 16:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.715 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:27.715 Zero copy mechanism will not be used. 00:13:27.715 [2024-11-08 16:54:57.189537] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:27.715 [2024-11-08 16:54:57.189775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87169 ] 00:13:27.974 [2024-11-08 16:54:57.376465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.974 [2024-11-08 16:54:57.430742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.974 [2024-11-08 16:54:57.477818] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.974 [2024-11-08 16:54:57.477892] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.910 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.911 BaseBdev1_malloc 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.911 [2024-11-08 16:54:58.123592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:28.911 [2024-11-08 16:54:58.123720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.911 [2024-11-08 16:54:58.123756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:28.911 [2024-11-08 16:54:58.123773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.911 [2024-11-08 16:54:58.126406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.911 [2024-11-08 16:54:58.126459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:28.911 BaseBdev1 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.911 BaseBdev2_malloc 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.911 [2024-11-08 16:54:58.168751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:28.911 [2024-11-08 16:54:58.168842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.911 [2024-11-08 16:54:58.168873] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:28.911 [2024-11-08 16:54:58.168886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.911 [2024-11-08 16:54:58.171861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.911 [2024-11-08 16:54:58.171927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:28.911 BaseBdev2 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.911 spare_malloc 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.911 spare_delay 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.911 [2024-11-08 16:54:58.210555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:28.911 [2024-11-08 16:54:58.210657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.911 [2024-11-08 16:54:58.210690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:28.911 [2024-11-08 16:54:58.210701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.911 [2024-11-08 16:54:58.213428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.911 [2024-11-08 16:54:58.213480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:28.911 spare 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.911 [2024-11-08 16:54:58.222573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.911 [2024-11-08 16:54:58.224921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.911 [2024-11-08 16:54:58.225053] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:28.911 [2024-11-08 16:54:58.225073] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:28.911 [2024-11-08 16:54:58.225425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:28.911 [2024-11-08 16:54:58.225589] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:28.911 [2024-11-08 16:54:58.225611] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:13:28.911 [2024-11-08 16:54:58.225810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.911 
"name": "raid_bdev1", 00:13:28.911 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:28.911 "strip_size_kb": 0, 00:13:28.911 "state": "online", 00:13:28.911 "raid_level": "raid1", 00:13:28.911 "superblock": false, 00:13:28.911 "num_base_bdevs": 2, 00:13:28.911 "num_base_bdevs_discovered": 2, 00:13:28.911 "num_base_bdevs_operational": 2, 00:13:28.911 "base_bdevs_list": [ 00:13:28.911 { 00:13:28.911 "name": "BaseBdev1", 00:13:28.911 "uuid": "9820490f-a9f8-52cd-92d9-d21d286a5dde", 00:13:28.911 "is_configured": true, 00:13:28.911 "data_offset": 0, 00:13:28.911 "data_size": 65536 00:13:28.911 }, 00:13:28.911 { 00:13:28.911 "name": "BaseBdev2", 00:13:28.911 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:28.911 "is_configured": true, 00:13:28.911 "data_offset": 0, 00:13:28.911 "data_size": 65536 00:13:28.911 } 00:13:28.911 ] 00:13:28.911 }' 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.911 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.183 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:29.183 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.184 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.184 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:29.184 [2024-11-08 16:54:58.678174] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.184 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.445 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:29.445 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.445 16:54:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:29.445 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.445 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.445 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.445 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.446 [2024-11-08 16:54:58.777701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:29.446 16:54:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.446 "name": "raid_bdev1", 00:13:29.446 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:29.446 "strip_size_kb": 0, 00:13:29.446 "state": "online", 00:13:29.446 "raid_level": "raid1", 00:13:29.446 "superblock": false, 00:13:29.446 "num_base_bdevs": 2, 00:13:29.446 "num_base_bdevs_discovered": 1, 00:13:29.446 "num_base_bdevs_operational": 1, 00:13:29.446 "base_bdevs_list": [ 00:13:29.446 { 00:13:29.446 "name": null, 00:13:29.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.446 "is_configured": false, 00:13:29.446 "data_offset": 0, 00:13:29.446 "data_size": 65536 00:13:29.446 }, 00:13:29.446 { 00:13:29.446 "name": "BaseBdev2", 00:13:29.446 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:29.446 "is_configured": true, 00:13:29.446 "data_offset": 0, 00:13:29.446 "data_size": 65536 00:13:29.446 } 00:13:29.446 ] 00:13:29.446 }' 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:29.446 16:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.446 [2024-11-08 16:54:58.891692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:29.446 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:29.446 Zero copy mechanism will not be used. 00:13:29.446 Running I/O for 60 seconds... 00:13:29.705 16:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:29.705 16:54:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.705 16:54:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.705 [2024-11-08 16:54:59.228910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.964 16:54:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.965 16:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:29.965 [2024-11-08 16:54:59.287473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:29.965 [2024-11-08 16:54:59.289934] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:29.965 [2024-11-08 16:54:59.400870] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:29.965 [2024-11-08 16:54:59.401476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:30.224 [2024-11-08 16:54:59.605139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:30.224 [2024-11-08 16:54:59.605499] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:30.742 131.00 IOPS, 393.00 MiB/s 
[2024-11-08T16:55:00.270Z] 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.742 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.742 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.742 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.742 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.001 "name": "raid_bdev1", 00:13:31.001 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:31.001 "strip_size_kb": 0, 00:13:31.001 "state": "online", 00:13:31.001 "raid_level": "raid1", 00:13:31.001 "superblock": false, 00:13:31.001 "num_base_bdevs": 2, 00:13:31.001 "num_base_bdevs_discovered": 2, 00:13:31.001 "num_base_bdevs_operational": 2, 00:13:31.001 "process": { 00:13:31.001 "type": "rebuild", 00:13:31.001 "target": "spare", 00:13:31.001 "progress": { 00:13:31.001 "blocks": 12288, 00:13:31.001 "percent": 18 00:13:31.001 } 00:13:31.001 }, 00:13:31.001 "base_bdevs_list": [ 00:13:31.001 { 00:13:31.001 "name": "spare", 00:13:31.001 "uuid": "65c07a57-70cc-57c8-b9f5-8b8b678681fb", 00:13:31.001 "is_configured": true, 00:13:31.001 "data_offset": 0, 
00:13:31.001 "data_size": 65536 00:13:31.001 }, 00:13:31.001 { 00:13:31.001 "name": "BaseBdev2", 00:13:31.001 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:31.001 "is_configured": true, 00:13:31.001 "data_offset": 0, 00:13:31.001 "data_size": 65536 00:13:31.001 } 00:13:31.001 ] 00:13:31.001 }' 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.001 16:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.001 [2024-11-08 16:55:00.436083] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.001 [2024-11-08 16:55:00.471730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:31.259 [2024-11-08 16:55:00.578714] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:31.259 [2024-11-08 16:55:00.596048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.259 [2024-11-08 16:55:00.596173] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.259 [2024-11-08 16:55:00.596198] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:31.259 [2024-11-08 16:55:00.617571] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 
0x60d000005ee0 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.259 "name": "raid_bdev1", 00:13:31.259 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:31.259 "strip_size_kb": 0, 
00:13:31.259 "state": "online", 00:13:31.259 "raid_level": "raid1", 00:13:31.259 "superblock": false, 00:13:31.259 "num_base_bdevs": 2, 00:13:31.259 "num_base_bdevs_discovered": 1, 00:13:31.259 "num_base_bdevs_operational": 1, 00:13:31.259 "base_bdevs_list": [ 00:13:31.259 { 00:13:31.259 "name": null, 00:13:31.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.259 "is_configured": false, 00:13:31.259 "data_offset": 0, 00:13:31.259 "data_size": 65536 00:13:31.259 }, 00:13:31.259 { 00:13:31.259 "name": "BaseBdev2", 00:13:31.259 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:31.259 "is_configured": true, 00:13:31.259 "data_offset": 0, 00:13:31.259 "data_size": 65536 00:13:31.259 } 00:13:31.259 ] 00:13:31.259 }' 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.259 16:55:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.775 129.00 IOPS, 387.00 MiB/s [2024-11-08T16:55:01.303Z] 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.775 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.775 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.775 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.775 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.775 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.775 16:55:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.775 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.775 16:55:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.775 
16:55:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.775 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.775 "name": "raid_bdev1", 00:13:31.775 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:31.775 "strip_size_kb": 0, 00:13:31.775 "state": "online", 00:13:31.775 "raid_level": "raid1", 00:13:31.775 "superblock": false, 00:13:31.775 "num_base_bdevs": 2, 00:13:31.775 "num_base_bdevs_discovered": 1, 00:13:31.775 "num_base_bdevs_operational": 1, 00:13:31.775 "base_bdevs_list": [ 00:13:31.775 { 00:13:31.775 "name": null, 00:13:31.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.775 "is_configured": false, 00:13:31.775 "data_offset": 0, 00:13:31.775 "data_size": 65536 00:13:31.775 }, 00:13:31.775 { 00:13:31.775 "name": "BaseBdev2", 00:13:31.775 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:31.775 "is_configured": true, 00:13:31.775 "data_offset": 0, 00:13:31.775 "data_size": 65536 00:13:31.775 } 00:13:31.776 ] 00:13:31.776 }' 00:13:31.776 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.776 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.776 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.776 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.776 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:31.776 16:55:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.776 16:55:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.776 [2024-11-08 16:55:01.245987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.034 16:55:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.034 16:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:32.034 [2024-11-08 16:55:01.317684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:32.034 [2024-11-08 16:55:01.319907] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:32.034 [2024-11-08 16:55:01.436338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:32.034 [2024-11-08 16:55:01.436929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:32.034 [2024-11-08 16:55:01.554185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:32.034 [2024-11-08 16:55:01.554530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:32.599 [2024-11-08 16:55:01.888612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:32.599 [2024-11-08 16:55:01.889181] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:32.599 135.67 IOPS, 407.00 MiB/s [2024-11-08T16:55:02.127Z] [2024-11-08 16:55:02.098548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:32.599 [2024-11-08 16:55:02.098920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.857 "name": "raid_bdev1", 00:13:32.857 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:32.857 "strip_size_kb": 0, 00:13:32.857 "state": "online", 00:13:32.857 "raid_level": "raid1", 00:13:32.857 "superblock": false, 00:13:32.857 "num_base_bdevs": 2, 00:13:32.857 "num_base_bdevs_discovered": 2, 00:13:32.857 "num_base_bdevs_operational": 2, 00:13:32.857 "process": { 00:13:32.857 "type": "rebuild", 00:13:32.857 "target": "spare", 00:13:32.857 "progress": { 00:13:32.857 "blocks": 12288, 00:13:32.857 "percent": 18 00:13:32.857 } 00:13:32.857 }, 00:13:32.857 "base_bdevs_list": [ 00:13:32.857 { 00:13:32.857 "name": "spare", 00:13:32.857 "uuid": "65c07a57-70cc-57c8-b9f5-8b8b678681fb", 00:13:32.857 "is_configured": true, 00:13:32.857 "data_offset": 0, 00:13:32.857 "data_size": 65536 00:13:32.857 }, 00:13:32.857 { 00:13:32.857 "name": "BaseBdev2", 00:13:32.857 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:32.857 "is_configured": true, 00:13:32.857 "data_offset": 0, 00:13:32.857 
"data_size": 65536 00:13:32.857 } 00:13:32.857 ] 00:13:32.857 }' 00:13:32.857 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:33.116 [2024-11-08 16:55:02.434035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=327 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.116 [2024-11-08 16:55:02.434583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.116 "name": "raid_bdev1", 00:13:33.116 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:33.116 "strip_size_kb": 0, 00:13:33.116 "state": "online", 00:13:33.116 "raid_level": "raid1", 00:13:33.116 "superblock": false, 00:13:33.116 "num_base_bdevs": 2, 00:13:33.116 "num_base_bdevs_discovered": 2, 00:13:33.116 "num_base_bdevs_operational": 2, 00:13:33.116 "process": { 00:13:33.116 "type": "rebuild", 00:13:33.116 "target": "spare", 00:13:33.116 "progress": { 00:13:33.116 "blocks": 14336, 00:13:33.116 "percent": 21 00:13:33.116 } 00:13:33.116 }, 00:13:33.116 "base_bdevs_list": [ 00:13:33.116 { 00:13:33.116 "name": "spare", 00:13:33.116 "uuid": "65c07a57-70cc-57c8-b9f5-8b8b678681fb", 00:13:33.116 "is_configured": true, 00:13:33.116 "data_offset": 0, 00:13:33.116 "data_size": 65536 00:13:33.116 }, 00:13:33.116 { 00:13:33.116 "name": "BaseBdev2", 00:13:33.116 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:33.116 "is_configured": true, 00:13:33.116 "data_offset": 0, 00:13:33.116 "data_size": 65536 00:13:33.116 } 00:13:33.116 ] 00:13:33.116 }' 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.116 16:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:33.374 [2024-11-08 16:55:02.642881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:33.374 [2024-11-08 16:55:02.643240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:33.632 111.00 IOPS, 333.00 MiB/s [2024-11-08T16:55:03.160Z] [2024-11-08 16:55:03.007262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:33.632 [2024-11-08 16:55:03.135467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.198 "name": "raid_bdev1", 00:13:34.198 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:34.198 "strip_size_kb": 0, 00:13:34.198 "state": "online", 00:13:34.198 "raid_level": "raid1", 00:13:34.198 "superblock": false, 00:13:34.198 "num_base_bdevs": 2, 00:13:34.198 "num_base_bdevs_discovered": 2, 00:13:34.198 "num_base_bdevs_operational": 2, 00:13:34.198 "process": { 00:13:34.198 "type": "rebuild", 00:13:34.198 "target": "spare", 00:13:34.198 "progress": { 00:13:34.198 "blocks": 28672, 00:13:34.198 "percent": 43 00:13:34.198 } 00:13:34.198 }, 00:13:34.198 "base_bdevs_list": [ 00:13:34.198 { 00:13:34.198 "name": "spare", 00:13:34.198 "uuid": "65c07a57-70cc-57c8-b9f5-8b8b678681fb", 00:13:34.198 "is_configured": true, 00:13:34.198 "data_offset": 0, 00:13:34.198 "data_size": 65536 00:13:34.198 }, 00:13:34.198 { 00:13:34.198 "name": "BaseBdev2", 00:13:34.198 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:34.198 "is_configured": true, 00:13:34.198 "data_offset": 0, 00:13:34.198 "data_size": 65536 00:13:34.198 } 00:13:34.198 ] 00:13:34.198 }' 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.198 16:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:13:34.714 101.00 IOPS, 303.00 MiB/s [2024-11-08T16:55:04.242Z] [2024-11-08 16:55:04.168918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:35.285 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.285 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.286 "name": "raid_bdev1", 00:13:35.286 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:35.286 "strip_size_kb": 0, 00:13:35.286 "state": "online", 00:13:35.286 "raid_level": "raid1", 00:13:35.286 "superblock": false, 00:13:35.286 "num_base_bdevs": 2, 00:13:35.286 "num_base_bdevs_discovered": 2, 00:13:35.286 "num_base_bdevs_operational": 2, 00:13:35.286 "process": { 00:13:35.286 "type": "rebuild", 00:13:35.286 "target": "spare", 00:13:35.286 
"progress": { 00:13:35.286 "blocks": 47104, 00:13:35.286 "percent": 71 00:13:35.286 } 00:13:35.286 }, 00:13:35.286 "base_bdevs_list": [ 00:13:35.286 { 00:13:35.286 "name": "spare", 00:13:35.286 "uuid": "65c07a57-70cc-57c8-b9f5-8b8b678681fb", 00:13:35.286 "is_configured": true, 00:13:35.286 "data_offset": 0, 00:13:35.286 "data_size": 65536 00:13:35.286 }, 00:13:35.286 { 00:13:35.286 "name": "BaseBdev2", 00:13:35.286 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:35.286 "is_configured": true, 00:13:35.286 "data_offset": 0, 00:13:35.286 "data_size": 65536 00:13:35.286 } 00:13:35.286 ] 00:13:35.286 }' 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.286 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.545 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.545 16:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:35.805 90.83 IOPS, 272.50 MiB/s [2024-11-08T16:55:05.333Z] [2024-11-08 16:55:05.188244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:36.063 [2024-11-08 16:55:05.420286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:36.631 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.631 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.632 [2024-11-08 16:55:05.873923] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.632 "name": "raid_bdev1", 00:13:36.632 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:36.632 "strip_size_kb": 0, 00:13:36.632 "state": "online", 00:13:36.632 "raid_level": "raid1", 00:13:36.632 "superblock": false, 00:13:36.632 "num_base_bdevs": 2, 00:13:36.632 "num_base_bdevs_discovered": 2, 00:13:36.632 "num_base_bdevs_operational": 2, 00:13:36.632 "process": { 00:13:36.632 "type": "rebuild", 00:13:36.632 "target": "spare", 00:13:36.632 "progress": { 00:13:36.632 "blocks": 63488, 00:13:36.632 "percent": 96 00:13:36.632 } 00:13:36.632 }, 00:13:36.632 "base_bdevs_list": [ 00:13:36.632 { 00:13:36.632 "name": "spare", 00:13:36.632 "uuid": "65c07a57-70cc-57c8-b9f5-8b8b678681fb", 00:13:36.632 "is_configured": true, 00:13:36.632 "data_offset": 0, 00:13:36.632 "data_size": 65536 00:13:36.632 }, 00:13:36.632 { 00:13:36.632 "name": "BaseBdev2", 00:13:36.632 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:36.632 "is_configured": true, 00:13:36.632 "data_offset": 0, 
00:13:36.632 "data_size": 65536 00:13:36.632 } 00:13:36.632 ] 00:13:36.632 }' 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.632 82.86 IOPS, 248.57 MiB/s [2024-11-08T16:55:06.160Z] 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.632 [2024-11-08 16:55:05.981276] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:36.632 [2024-11-08 16:55:05.984024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.632 16:55:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.570 75.88 IOPS, 227.62 MiB/s [2024-11-08T16:55:07.098Z] 16:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.570 16:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.570 16:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.570 16:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.570 16:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.570 16:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.570 16:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.570 16:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.570 16:55:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:37.570 16:55:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.570 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.570 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.570 "name": "raid_bdev1", 00:13:37.570 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:37.570 "strip_size_kb": 0, 00:13:37.570 "state": "online", 00:13:37.570 "raid_level": "raid1", 00:13:37.570 "superblock": false, 00:13:37.570 "num_base_bdevs": 2, 00:13:37.570 "num_base_bdevs_discovered": 2, 00:13:37.570 "num_base_bdevs_operational": 2, 00:13:37.570 "base_bdevs_list": [ 00:13:37.570 { 00:13:37.570 "name": "spare", 00:13:37.570 "uuid": "65c07a57-70cc-57c8-b9f5-8b8b678681fb", 00:13:37.570 "is_configured": true, 00:13:37.570 "data_offset": 0, 00:13:37.570 "data_size": 65536 00:13:37.570 }, 00:13:37.570 { 00:13:37.570 "name": "BaseBdev2", 00:13:37.570 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:37.570 "is_configured": true, 00:13:37.570 "data_offset": 0, 00:13:37.570 "data_size": 65536 00:13:37.570 } 00:13:37.570 ] 00:13:37.570 }' 00:13:37.570 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.570 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:37.570 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.830 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:37.830 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:37.830 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.830 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.830 16:55:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.830 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.831 "name": "raid_bdev1", 00:13:37.831 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:37.831 "strip_size_kb": 0, 00:13:37.831 "state": "online", 00:13:37.831 "raid_level": "raid1", 00:13:37.831 "superblock": false, 00:13:37.831 "num_base_bdevs": 2, 00:13:37.831 "num_base_bdevs_discovered": 2, 00:13:37.831 "num_base_bdevs_operational": 2, 00:13:37.831 "base_bdevs_list": [ 00:13:37.831 { 00:13:37.831 "name": "spare", 00:13:37.831 "uuid": "65c07a57-70cc-57c8-b9f5-8b8b678681fb", 00:13:37.831 "is_configured": true, 00:13:37.831 "data_offset": 0, 00:13:37.831 "data_size": 65536 00:13:37.831 }, 00:13:37.831 { 00:13:37.831 "name": "BaseBdev2", 00:13:37.831 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:37.831 "is_configured": true, 00:13:37.831 "data_offset": 0, 00:13:37.831 "data_size": 65536 00:13:37.831 } 00:13:37.831 ] 00:13:37.831 }' 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # [[ none == \n\o\n\e ]] 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.831 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.091 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:38.091 "name": "raid_bdev1", 00:13:38.091 "uuid": "a8c6cfe2-ab85-4820-80ae-3bdf8348aa90", 00:13:38.091 "strip_size_kb": 0, 00:13:38.091 "state": "online", 00:13:38.091 "raid_level": "raid1", 00:13:38.091 "superblock": false, 00:13:38.091 "num_base_bdevs": 2, 00:13:38.091 "num_base_bdevs_discovered": 2, 00:13:38.091 "num_base_bdevs_operational": 2, 00:13:38.091 "base_bdevs_list": [ 00:13:38.091 { 00:13:38.091 "name": "spare", 00:13:38.091 "uuid": "65c07a57-70cc-57c8-b9f5-8b8b678681fb", 00:13:38.091 "is_configured": true, 00:13:38.091 "data_offset": 0, 00:13:38.091 "data_size": 65536 00:13:38.091 }, 00:13:38.091 { 00:13:38.091 "name": "BaseBdev2", 00:13:38.091 "uuid": "30065126-3ccc-55ff-942a-1ed11213baa8", 00:13:38.091 "is_configured": true, 00:13:38.091 "data_offset": 0, 00:13:38.091 "data_size": 65536 00:13:38.091 } 00:13:38.091 ] 00:13:38.091 }' 00:13:38.091 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.091 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.350 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:38.350 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.350 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.350 [2024-11-08 16:55:07.802218] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.350 [2024-11-08 16:55:07.802283] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.350 00:13:38.350 Latency(us) 00:13:38.350 [2024-11-08T16:55:07.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.350 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:38.350 raid_bdev1 : 8.96 71.23 213.69 0.00 0.00 20049.45 359.52 118136.51 
00:13:38.350 [2024-11-08T16:55:07.878Z] =================================================================================================================== 00:13:38.350 [2024-11-08T16:55:07.878Z] Total : 71.23 213.69 0.00 0.00 20049.45 359.52 118136.51 00:13:38.350 [2024-11-08 16:55:07.838856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.350 [2024-11-08 16:55:07.838925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.350 [2024-11-08 16:55:07.839025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.350 [2024-11-08 16:55:07.839046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:38.350 { 00:13:38.350 "results": [ 00:13:38.350 { 00:13:38.350 "job": "raid_bdev1", 00:13:38.350 "core_mask": "0x1", 00:13:38.350 "workload": "randrw", 00:13:38.350 "percentage": 50, 00:13:38.350 "status": "finished", 00:13:38.350 "queue_depth": 2, 00:13:38.350 "io_size": 3145728, 00:13:38.350 "runtime": 8.957089, 00:13:38.350 "iops": 71.2284984552459, 00:13:38.350 "mibps": 213.6854953657377, 00:13:38.350 "io_failed": 0, 00:13:38.350 "io_timeout": 0, 00:13:38.350 "avg_latency_us": 20049.453494134235, 00:13:38.350 "min_latency_us": 359.517903930131, 00:13:38.350 "max_latency_us": 118136.51004366812 00:13:38.350 } 00:13:38.350 ], 00:13:38.350 "core_count": 1 00:13:38.350 } 00:13:38.350 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.350 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.350 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.350 16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.350 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:38.350 
16:55:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.609 16:55:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:38.867 /dev/nbd0 00:13:38.867 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:38.867 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:38.867 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:38.867 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:38.867 16:55:08 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:38.867 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:38.867 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:38.867 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:38.867 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:38.867 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:38.867 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.868 1+0 records in 00:13:38.868 1+0 records out 00:13:38.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335214 s, 12.2 MB/s 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # 
nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.868 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:39.127 /dev/nbd1 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- 
# (( i = 1 )) 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.127 1+0 records in 00:13:39.127 1+0 records out 00:13:39.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296516 s, 13.8 MB/s 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:13:39.127 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:39.386 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:39.386 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:39.386 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:39.386 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.386 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.386 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:39.386 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:39.386 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.386 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:39.386 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.386 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:39.387 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:39.387 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:39.387 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.387 16:55:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87169 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87169 ']' 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87169 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87169 00:13:39.955 killing process with pid 87169 00:13:39.955 Received shutdown signal, test time was about 10.363896 seconds 00:13:39.955 00:13:39.955 Latency(us) 00:13:39.955 [2024-11-08T16:55:09.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.955 [2024-11-08T16:55:09.483Z] =================================================================================================================== 00:13:39.955 [2024-11-08T16:55:09.483Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:39.955 16:55:09 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87169' 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87169 00:13:39.955 16:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87169 00:13:39.955 [2024-11-08 16:55:09.238130] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.955 [2024-11-08 16:55:09.267104] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.216 ************************************ 00:13:40.216 END TEST raid_rebuild_test_io 00:13:40.216 ************************************ 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:40.216 00:13:40.216 real 0m12.446s 00:13:40.216 user 0m16.077s 00:13:40.216 sys 0m1.474s 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.216 16:55:09 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:40.216 16:55:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:40.216 16:55:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.216 16:55:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.216 ************************************ 00:13:40.216 START TEST raid_rebuild_test_sb_io 00:13:40.216 ************************************ 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:40.216 16:55:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 
00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87564 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87564 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87564 ']' 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.216 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.216 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:40.216 Zero copy mechanism will not be used. 00:13:40.216 [2024-11-08 16:55:09.665234] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:40.216 [2024-11-08 16:55:09.665403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87564 ] 00:13:40.475 [2024-11-08 16:55:09.831677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.475 [2024-11-08 16:55:09.886691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.475 [2024-11-08 16:55:09.932512] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.475 [2024-11-08 16:55:09.932567] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.475 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:40.475 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:40.475 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:40.475 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:40.475 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.475 16:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.734 BaseBdev1_malloc 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.734 [2024-11-08 16:55:10.014506] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:40.734 [2024-11-08 16:55:10.014595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.734 [2024-11-08 16:55:10.014648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:40.734 [2024-11-08 16:55:10.014669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.734 [2024-11-08 16:55:10.017383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.734 [2024-11-08 16:55:10.017442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:40.734 BaseBdev1 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.734 BaseBdev2_malloc 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.734 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.734 [2024-11-08 16:55:10.046576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:40.735 [2024-11-08 16:55:10.046676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:40.735 [2024-11-08 16:55:10.046712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:40.735 [2024-11-08 16:55:10.046726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.735 [2024-11-08 16:55:10.049560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.735 [2024-11-08 16:55:10.049617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:40.735 BaseBdev2 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.735 spare_malloc 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.735 spare_delay 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.735 
[2024-11-08 16:55:10.076617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:40.735 [2024-11-08 16:55:10.076710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.735 [2024-11-08 16:55:10.076742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:40.735 [2024-11-08 16:55:10.076754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.735 [2024-11-08 16:55:10.079494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.735 [2024-11-08 16:55:10.079548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:40.735 spare 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.735 [2024-11-08 16:55:10.084662] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.735 [2024-11-08 16:55:10.086995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.735 [2024-11-08 16:55:10.087231] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:40.735 [2024-11-08 16:55:10.087269] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:40.735 [2024-11-08 16:55:10.087687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:40.735 [2024-11-08 16:55:10.087886] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:40.735 [2024-11-08 
16:55:10.087911] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:40.735 [2024-11-08 16:55:10.088119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.735 "name": "raid_bdev1", 00:13:40.735 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:40.735 "strip_size_kb": 0, 00:13:40.735 "state": "online", 00:13:40.735 "raid_level": "raid1", 00:13:40.735 "superblock": true, 00:13:40.735 "num_base_bdevs": 2, 00:13:40.735 "num_base_bdevs_discovered": 2, 00:13:40.735 "num_base_bdevs_operational": 2, 00:13:40.735 "base_bdevs_list": [ 00:13:40.735 { 00:13:40.735 "name": "BaseBdev1", 00:13:40.735 "uuid": "37168f24-a820-597e-9c2d-cc77c389c37b", 00:13:40.735 "is_configured": true, 00:13:40.735 "data_offset": 2048, 00:13:40.735 "data_size": 63488 00:13:40.735 }, 00:13:40.735 { 00:13:40.735 "name": "BaseBdev2", 00:13:40.735 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:40.735 "is_configured": true, 00:13:40.735 "data_offset": 2048, 00:13:40.735 "data_size": 63488 00:13:40.735 } 00:13:40.735 ] 00:13:40.735 }' 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.735 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:41.303 [2024-11-08 16:55:10.560181] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.303 [2024-11-08 16:55:10.659791] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.303 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.304 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.304 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.304 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.304 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.304 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.304 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.304 "name": "raid_bdev1", 00:13:41.304 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:41.304 "strip_size_kb": 0, 00:13:41.304 "state": "online", 00:13:41.304 "raid_level": "raid1", 00:13:41.304 "superblock": true, 00:13:41.304 "num_base_bdevs": 2, 00:13:41.304 "num_base_bdevs_discovered": 1, 00:13:41.304 "num_base_bdevs_operational": 1, 00:13:41.304 "base_bdevs_list": [ 00:13:41.304 { 00:13:41.304 "name": null, 00:13:41.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.304 "is_configured": false, 00:13:41.304 "data_offset": 0, 00:13:41.304 "data_size": 63488 00:13:41.304 }, 00:13:41.304 { 00:13:41.304 "name": "BaseBdev2", 00:13:41.304 "uuid": 
"23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:41.304 "is_configured": true, 00:13:41.304 "data_offset": 2048, 00:13:41.304 "data_size": 63488 00:13:41.304 } 00:13:41.304 ] 00:13:41.304 }' 00:13:41.304 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.304 16:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.304 [2024-11-08 16:55:10.781841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:41.304 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:41.304 Zero copy mechanism will not be used. 00:13:41.304 Running I/O for 60 seconds... 00:13:41.873 16:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:41.873 16:55:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.873 16:55:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.873 [2024-11-08 16:55:11.158921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.873 16:55:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.873 16:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:41.873 [2024-11-08 16:55:11.209050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:41.873 [2024-11-08 16:55:11.211541] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:41.873 [2024-11-08 16:55:11.338317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:41.873 [2024-11-08 16:55:11.339062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.132 [2024-11-08 16:55:11.567673] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.132 [2024-11-08 16:55:11.568025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.391 122.00 IOPS, 366.00 MiB/s [2024-11-08T16:55:11.919Z] [2024-11-08 16:55:11.906969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:42.657 [2024-11-08 16:55:12.026180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:42.657 [2024-11-08 16:55:12.026618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.917 16:55:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.917 "name": "raid_bdev1", 00:13:42.917 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:42.917 "strip_size_kb": 0, 00:13:42.917 "state": "online", 00:13:42.917 "raid_level": "raid1", 00:13:42.917 "superblock": true, 00:13:42.917 "num_base_bdevs": 2, 00:13:42.917 "num_base_bdevs_discovered": 2, 00:13:42.917 "num_base_bdevs_operational": 2, 00:13:42.917 "process": { 00:13:42.917 "type": "rebuild", 00:13:42.917 "target": "spare", 00:13:42.917 "progress": { 00:13:42.917 "blocks": 12288, 00:13:42.917 "percent": 19 00:13:42.917 } 00:13:42.917 }, 00:13:42.917 "base_bdevs_list": [ 00:13:42.917 { 00:13:42.917 "name": "spare", 00:13:42.917 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:42.917 "is_configured": true, 00:13:42.917 "data_offset": 2048, 00:13:42.917 "data_size": 63488 00:13:42.917 }, 00:13:42.917 { 00:13:42.917 "name": "BaseBdev2", 00:13:42.917 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:42.917 "is_configured": true, 00:13:42.917 "data_offset": 2048, 00:13:42.917 "data_size": 63488 00:13:42.917 } 00:13:42.917 ] 00:13:42.917 }' 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.917 [2024-11-08 16:55:12.274900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- 
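The `verify_raid_bdev_process` checks above capture the raid bdev's JSON and probe it with `jq -r '.process.type // "none"'` and `.process.target // "none"`. A minimal sketch of how those filters behave (sample JSON abridged from the `raid_bdev_info` captured in this log; jq's `//` operator substitutes the fallback when the left side is null, i.e. when no rebuild process is running):

```shell
#!/bin/sh
# When a rebuild is in progress, .process.type is present and is returned as-is.
info='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}'
echo "$info" | jq -r '.process.type // "none"'      # -> rebuild
echo "$info" | jq -r '.process.target // "none"'    # -> spare

# After the rebuild finishes, .process disappears from the RPC output,
# so the same filter falls back to the "none" alternative.
done_info='{"name":"raid_bdev1","state":"online"}'
echo "$done_info" | jq -r '.process.type // "none"' # -> none
```

This is why the trace shows `[[ rebuild == \r\e\b\u\i\l\d ]]` while the rebuild runs and `[[ none == \n\o\n\e ]]` once it completes: the filter never emits an empty string, so the `[[ ]]` comparisons always have a well-formed operand.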
common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.917 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.917 [2024-11-08 16:55:12.350116] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.177 [2024-11-08 16:55:12.490210] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:43.177 [2024-11-08 16:55:12.500388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.177 [2024-11-08 16:55:12.500564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.177 [2024-11-08 16:55:12.500610] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:43.177 [2024-11-08 16:55:12.514667] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.177 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.177 "name": "raid_bdev1", 00:13:43.177 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:43.177 "strip_size_kb": 0, 00:13:43.177 "state": "online", 00:13:43.177 "raid_level": "raid1", 00:13:43.177 "superblock": true, 00:13:43.177 "num_base_bdevs": 2, 00:13:43.177 "num_base_bdevs_discovered": 1, 00:13:43.177 "num_base_bdevs_operational": 1, 00:13:43.177 "base_bdevs_list": [ 00:13:43.177 { 00:13:43.177 "name": null, 00:13:43.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.178 "is_configured": false, 00:13:43.178 "data_offset": 0, 00:13:43.178 "data_size": 63488 00:13:43.178 }, 00:13:43.178 { 00:13:43.178 "name": "BaseBdev2", 00:13:43.178 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:43.178 "is_configured": true, 00:13:43.178 "data_offset": 2048, 00:13:43.178 "data_size": 63488 00:13:43.178 } 00:13:43.178 ] 00:13:43.178 }' 00:13:43.178 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.178 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.697 132.00 IOPS, 396.00 MiB/s [2024-11-08T16:55:13.225Z] 16:55:12 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.697 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.697 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.697 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.697 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.697 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.697 16:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.697 "name": "raid_bdev1", 00:13:43.697 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:43.697 "strip_size_kb": 0, 00:13:43.697 "state": "online", 00:13:43.697 "raid_level": "raid1", 00:13:43.697 "superblock": true, 00:13:43.697 "num_base_bdevs": 2, 00:13:43.697 "num_base_bdevs_discovered": 1, 00:13:43.697 "num_base_bdevs_operational": 1, 00:13:43.697 "base_bdevs_list": [ 00:13:43.697 { 00:13:43.697 "name": null, 00:13:43.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.697 "is_configured": false, 00:13:43.697 "data_offset": 0, 00:13:43.697 "data_size": 63488 00:13:43.697 }, 00:13:43.697 { 00:13:43.697 "name": "BaseBdev2", 00:13:43.697 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:43.697 "is_configured": true, 00:13:43.697 "data_offset": 2048, 00:13:43.697 "data_size": 
63488 00:13:43.697 } 00:13:43.697 ] 00:13:43.697 }' 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.697 [2024-11-08 16:55:13.157683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.697 16:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:43.697 [2024-11-08 16:55:13.214652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:43.697 [2024-11-08 16:55:13.217006] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:43.957 [2024-11-08 16:55:13.327733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:43.957 [2024-11-08 16:55:13.328307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:43.957 [2024-11-08 16:55:13.455733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:43.957 [2024-11-08 16:55:13.456064] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:44.785 155.67 IOPS, 467.00 MiB/s [2024-11-08T16:55:14.313Z] [2024-11-08 16:55:14.059404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:44.785 [2024-11-08 16:55:14.060004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.785 "name": "raid_bdev1", 00:13:44.785 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:44.785 "strip_size_kb": 0, 00:13:44.785 "state": "online", 00:13:44.785 "raid_level": "raid1", 00:13:44.785 "superblock": true, 00:13:44.785 "num_base_bdevs": 2, 
00:13:44.785 "num_base_bdevs_discovered": 2, 00:13:44.785 "num_base_bdevs_operational": 2, 00:13:44.785 "process": { 00:13:44.785 "type": "rebuild", 00:13:44.785 "target": "spare", 00:13:44.785 "progress": { 00:13:44.785 "blocks": 14336, 00:13:44.785 "percent": 22 00:13:44.785 } 00:13:44.785 }, 00:13:44.785 "base_bdevs_list": [ 00:13:44.785 { 00:13:44.785 "name": "spare", 00:13:44.785 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:44.785 "is_configured": true, 00:13:44.785 "data_offset": 2048, 00:13:44.785 "data_size": 63488 00:13:44.785 }, 00:13:44.785 { 00:13:44.785 "name": "BaseBdev2", 00:13:44.785 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:44.785 "is_configured": true, 00:13:44.785 "data_offset": 2048, 00:13:44.785 "data_size": 63488 00:13:44.785 } 00:13:44.785 ] 00:13:44.785 }' 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.785 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:45.044 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- 
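The trace above records a real script error: `bdev_raid.sh: line 666: [: =: unary operator expected`, emitted by `'[' = false ']'`. This is the classic single-bracket pitfall: an unset or empty unquoted variable vanishes from the command line, so `[ $var = false ]` collapses to `[ = false ]` and `[` sees `=` where it expects an operand. A minimal reproduction and the quoting fix (variable names here are illustrative, not from the SPDK script):

```shell
#!/bin/sh
# Unquoted empty variable: the word disappears before [ runs.
flag=""
# [ $flag = false ]   # would fail with: [: =: unary operator expected

# Quoting preserves an (empty) operand, so the test is well-formed:
if [ "$flag" = false ]; then
    echo "flag is false"
else
    echo "flag is empty or not false"   # this branch is taken
fi
```

In bash specifically, the `[[ ]]` keyword avoids word splitting entirely, which is why the surrounding `[[ 0 == 0 ]]` checks in this log never trip over the same problem.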
bdev/bdev_raid.sh@706 -- # local timeout=339 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.044 "name": "raid_bdev1", 00:13:45.044 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:45.044 "strip_size_kb": 0, 00:13:45.044 "state": "online", 00:13:45.044 "raid_level": "raid1", 00:13:45.044 "superblock": true, 00:13:45.044 "num_base_bdevs": 2, 00:13:45.044 "num_base_bdevs_discovered": 2, 00:13:45.044 "num_base_bdevs_operational": 2, 00:13:45.044 "process": { 00:13:45.044 "type": "rebuild", 00:13:45.044 "target": "spare", 00:13:45.044 "progress": { 00:13:45.044 "blocks": 16384, 00:13:45.044 "percent": 25 00:13:45.044 } 00:13:45.044 }, 00:13:45.044 "base_bdevs_list": [ 
00:13:45.044 { 00:13:45.044 "name": "spare", 00:13:45.044 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:45.044 "is_configured": true, 00:13:45.044 "data_offset": 2048, 00:13:45.044 "data_size": 63488 00:13:45.044 }, 00:13:45.044 { 00:13:45.044 "name": "BaseBdev2", 00:13:45.044 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:45.044 "is_configured": true, 00:13:45.044 "data_offset": 2048, 00:13:45.044 "data_size": 63488 00:13:45.044 } 00:13:45.044 ] 00:13:45.044 }' 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.044 16:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.871 137.00 IOPS, 411.00 MiB/s [2024-11-08T16:55:15.399Z] [2024-11-08 16:55:15.206435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:46.130 [2024-11-08 16:55:15.417768] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.130 "name": "raid_bdev1", 00:13:46.130 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:46.130 "strip_size_kb": 0, 00:13:46.130 "state": "online", 00:13:46.130 "raid_level": "raid1", 00:13:46.130 "superblock": true, 00:13:46.130 "num_base_bdevs": 2, 00:13:46.130 "num_base_bdevs_discovered": 2, 00:13:46.130 "num_base_bdevs_operational": 2, 00:13:46.130 "process": { 00:13:46.130 "type": "rebuild", 00:13:46.130 "target": "spare", 00:13:46.130 "progress": { 00:13:46.130 "blocks": 34816, 00:13:46.130 "percent": 54 00:13:46.130 } 00:13:46.130 }, 00:13:46.130 "base_bdevs_list": [ 00:13:46.130 { 00:13:46.130 "name": "spare", 00:13:46.130 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:46.130 "is_configured": true, 00:13:46.130 "data_offset": 2048, 00:13:46.130 "data_size": 63488 00:13:46.130 }, 00:13:46.130 { 00:13:46.130 "name": "BaseBdev2", 00:13:46.130 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:46.130 "is_configured": true, 00:13:46.130 "data_offset": 2048, 00:13:46.130 "data_size": 63488 00:13:46.130 } 00:13:46.130 ] 00:13:46.130 }' 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.130 
16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.130 16:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.649 120.00 IOPS, 360.00 MiB/s [2024-11-08T16:55:16.177Z] [2024-11-08 16:55:15.970821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:46.908 [2024-11-08 16:55:16.189807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:46.908 [2024-11-08 16:55:16.417283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:47.166 [2024-11-08 16:55:16.627558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:47.166 [2024-11-08 16:55:16.628021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.166 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.166 "name": "raid_bdev1", 00:13:47.166 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:47.167 "strip_size_kb": 0, 00:13:47.167 "state": "online", 00:13:47.167 "raid_level": "raid1", 00:13:47.167 "superblock": true, 00:13:47.167 "num_base_bdevs": 2, 00:13:47.167 "num_base_bdevs_discovered": 2, 00:13:47.167 "num_base_bdevs_operational": 2, 00:13:47.167 "process": { 00:13:47.167 "type": "rebuild", 00:13:47.167 "target": "spare", 00:13:47.167 "progress": { 00:13:47.167 "blocks": 53248, 00:13:47.167 "percent": 83 00:13:47.167 } 00:13:47.167 }, 00:13:47.167 "base_bdevs_list": [ 00:13:47.167 { 00:13:47.167 "name": "spare", 00:13:47.167 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:47.167 "is_configured": true, 00:13:47.167 "data_offset": 2048, 00:13:47.167 "data_size": 63488 00:13:47.167 }, 00:13:47.167 { 00:13:47.167 "name": "BaseBdev2", 00:13:47.167 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:47.167 "is_configured": true, 00:13:47.167 "data_offset": 2048, 00:13:47.167 "data_size": 63488 00:13:47.167 } 00:13:47.167 ] 00:13:47.167 }' 00:13:47.167 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.425 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.425 
16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.425 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.425 16:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.993 106.00 IOPS, 318.00 MiB/s [2024-11-08T16:55:17.521Z] [2024-11-08 16:55:17.316575] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:47.993 [2024-11-08 16:55:17.423514] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:47.993 [2024-11-08 16:55:17.426612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.251 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.251 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.251 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.251 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.251 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.251 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.510 95.14 IOPS, 285.43 MiB/s [2024-11-08T16:55:18.038Z] 16:55:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.510 "name": "raid_bdev1", 00:13:48.510 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:48.510 "strip_size_kb": 0, 00:13:48.510 "state": "online", 00:13:48.510 "raid_level": "raid1", 00:13:48.510 "superblock": true, 00:13:48.510 "num_base_bdevs": 2, 00:13:48.510 "num_base_bdevs_discovered": 2, 00:13:48.510 "num_base_bdevs_operational": 2, 00:13:48.510 "base_bdevs_list": [ 00:13:48.510 { 00:13:48.510 "name": "spare", 00:13:48.510 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:48.510 "is_configured": true, 00:13:48.510 "data_offset": 2048, 00:13:48.510 "data_size": 63488 00:13:48.510 }, 00:13:48.510 { 00:13:48.510 "name": "BaseBdev2", 00:13:48.510 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:48.510 "is_configured": true, 00:13:48.510 "data_offset": 2048, 00:13:48.510 "data_size": 63488 00:13:48.510 } 00:13:48.510 ] 00:13:48.510 }' 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.510 16:55:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.510 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.510 "name": "raid_bdev1", 00:13:48.510 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:48.510 "strip_size_kb": 0, 00:13:48.510 "state": "online", 00:13:48.510 "raid_level": "raid1", 00:13:48.510 "superblock": true, 00:13:48.510 "num_base_bdevs": 2, 00:13:48.510 "num_base_bdevs_discovered": 2, 00:13:48.510 "num_base_bdevs_operational": 2, 00:13:48.510 "base_bdevs_list": [ 00:13:48.510 { 00:13:48.510 "name": "spare", 00:13:48.510 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:48.510 "is_configured": true, 00:13:48.510 "data_offset": 2048, 00:13:48.510 "data_size": 63488 00:13:48.510 }, 00:13:48.510 { 00:13:48.510 "name": "BaseBdev2", 00:13:48.510 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:48.510 "is_configured": true, 00:13:48.510 "data_offset": 2048, 00:13:48.510 "data_size": 63488 00:13:48.510 } 00:13:48.511 ] 00:13:48.511 }' 00:13:48.511 16:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.511 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.511 16:55:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:48.769 "name": "raid_bdev1", 00:13:48.769 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:48.769 "strip_size_kb": 0, 00:13:48.769 "state": "online", 00:13:48.769 "raid_level": "raid1", 00:13:48.769 "superblock": true, 00:13:48.769 "num_base_bdevs": 2, 00:13:48.769 "num_base_bdevs_discovered": 2, 00:13:48.769 "num_base_bdevs_operational": 2, 00:13:48.769 "base_bdevs_list": [ 00:13:48.769 { 00:13:48.769 "name": "spare", 00:13:48.769 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:48.769 "is_configured": true, 00:13:48.769 "data_offset": 2048, 00:13:48.769 "data_size": 63488 00:13:48.769 }, 00:13:48.769 { 00:13:48.769 "name": "BaseBdev2", 00:13:48.769 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:48.769 "is_configured": true, 00:13:48.769 "data_offset": 2048, 00:13:48.769 "data_size": 63488 00:13:48.769 } 00:13:48.769 ] 00:13:48.769 }' 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.769 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.028 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.028 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.028 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.028 [2024-11-08 16:55:18.527979] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.028 [2024-11-08 16:55:18.528032] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.288 00:13:49.288 Latency(us) 00:13:49.288 [2024-11-08T16:55:18.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.288 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:49.288 raid_bdev1 : 7.86 89.21 267.62 0.00 0.00 15812.48 
316.59 112641.79 00:13:49.288 [2024-11-08T16:55:18.816Z] =================================================================================================================== 00:13:49.288 [2024-11-08T16:55:18.816Z] Total : 89.21 267.62 0.00 0.00 15812.48 316.59 112641.79 00:13:49.288 { 00:13:49.288 "results": [ 00:13:49.288 { 00:13:49.288 "job": "raid_bdev1", 00:13:49.288 "core_mask": "0x1", 00:13:49.288 "workload": "randrw", 00:13:49.288 "percentage": 50, 00:13:49.288 "status": "finished", 00:13:49.288 "queue_depth": 2, 00:13:49.288 "io_size": 3145728, 00:13:49.288 "runtime": 7.858261, 00:13:49.288 "iops": 89.2054870664133, 00:13:49.288 "mibps": 267.6164611992399, 00:13:49.288 "io_failed": 0, 00:13:49.288 "io_timeout": 0, 00:13:49.288 "avg_latency_us": 15812.479031203084, 00:13:49.288 "min_latency_us": 316.5903930131004, 00:13:49.288 "max_latency_us": 112641.78864628822 00:13:49.288 } 00:13:49.288 ], 00:13:49.288 "core_count": 1 00:13:49.288 } 00:13:49.288 [2024-11-08 16:55:18.632769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.288 [2024-11-08 16:55:18.632830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.288 [2024-11-08 16:55:18.632971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.288 [2024-11-08 16:55:18.633004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.288 16:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:49.547 /dev/nbd0 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:49.547 16:55:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.547 1+0 records in 00:13:49.547 1+0 records out 00:13:49.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333165 s, 12.3 MB/s 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:49.547 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 
00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.548 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:49.808 /dev/nbd1 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.808 1+0 records in 00:13:49.808 1+0 records out 00:13:49.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361464 s, 11.3 MB/s 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.808 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:50.067 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:50.067 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.067 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:13:50.067 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.067 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.067 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.067 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.327 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.327 16:55:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:50.585 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:50.585 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:50.585 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:50.585 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.585 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.585 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:50.585 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:50.585 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.585 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:50.586 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:50.586 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.586 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.586 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.586 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:50.586 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.586 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.586 [2024-11-08 16:55:19.979289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:50.586 
[2024-11-08 16:55:19.979382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.586 [2024-11-08 16:55:19.979414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:50.586 [2024-11-08 16:55:19.979427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.586 [2024-11-08 16:55:19.982090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.586 [2024-11-08 16:55:19.982150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:50.586 [2024-11-08 16:55:19.982273] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:50.586 [2024-11-08 16:55:19.982323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:50.586 [2024-11-08 16:55:19.982475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.586 spare 00:13:50.586 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.586 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:50.586 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.586 16:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.586 [2024-11-08 16:55:20.082413] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:50.586 [2024-11-08 16:55:20.082489] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:50.586 [2024-11-08 16:55:20.082914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:13:50.586 [2024-11-08 16:55:20.083126] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:50.586 [2024-11-08 16:55:20.083165] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:50.586 [2024-11-08 16:55:20.083390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.586 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.864 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.864 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.864 "name": "raid_bdev1", 00:13:50.864 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:50.864 "strip_size_kb": 0, 00:13:50.864 "state": "online", 00:13:50.864 "raid_level": "raid1", 00:13:50.864 "superblock": true, 00:13:50.864 "num_base_bdevs": 2, 00:13:50.864 "num_base_bdevs_discovered": 2, 00:13:50.864 "num_base_bdevs_operational": 2, 00:13:50.864 "base_bdevs_list": [ 00:13:50.864 { 00:13:50.864 "name": "spare", 00:13:50.864 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:50.864 "is_configured": true, 00:13:50.864 "data_offset": 2048, 00:13:50.864 "data_size": 63488 00:13:50.864 }, 00:13:50.864 { 00:13:50.864 "name": "BaseBdev2", 00:13:50.864 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:50.864 "is_configured": true, 00:13:50.864 "data_offset": 2048, 00:13:50.864 "data_size": 63488 00:13:50.864 } 00:13:50.864 ] 00:13:50.864 }' 00:13:50.864 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.864 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.259 "name": "raid_bdev1", 00:13:51.259 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:51.259 "strip_size_kb": 0, 00:13:51.259 "state": "online", 00:13:51.259 "raid_level": "raid1", 00:13:51.259 "superblock": true, 00:13:51.259 "num_base_bdevs": 2, 00:13:51.259 "num_base_bdevs_discovered": 2, 00:13:51.259 "num_base_bdevs_operational": 2, 00:13:51.259 "base_bdevs_list": [ 00:13:51.259 { 00:13:51.259 "name": "spare", 00:13:51.259 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:51.259 "is_configured": true, 00:13:51.259 "data_offset": 2048, 00:13:51.259 "data_size": 63488 00:13:51.259 }, 00:13:51.259 { 00:13:51.259 "name": "BaseBdev2", 00:13:51.259 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:51.259 "is_configured": true, 00:13:51.259 "data_offset": 2048, 00:13:51.259 "data_size": 63488 00:13:51.259 } 00:13:51.259 ] 00:13:51.259 }' 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.259 [2024-11-08 16:55:20.767333] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.259 16:55:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.259 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.519 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.519 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.519 "name": "raid_bdev1", 00:13:51.519 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:51.519 "strip_size_kb": 0, 00:13:51.519 "state": "online", 00:13:51.519 "raid_level": "raid1", 00:13:51.519 "superblock": true, 00:13:51.519 "num_base_bdevs": 2, 00:13:51.519 "num_base_bdevs_discovered": 1, 00:13:51.519 "num_base_bdevs_operational": 1, 00:13:51.519 "base_bdevs_list": [ 00:13:51.519 { 00:13:51.519 "name": null, 00:13:51.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.519 "is_configured": false, 00:13:51.519 "data_offset": 0, 00:13:51.519 "data_size": 63488 00:13:51.519 }, 00:13:51.519 { 00:13:51.519 "name": "BaseBdev2", 00:13:51.519 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:51.519 "is_configured": true, 00:13:51.519 "data_offset": 2048, 00:13:51.519 "data_size": 63488 00:13:51.519 } 00:13:51.519 ] 00:13:51.519 }' 00:13:51.519 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.519 16:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.777 16:55:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.777 16:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.777 16:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.777 [2024-11-08 16:55:21.207343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.777 [2024-11-08 16:55:21.207657] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:51.777 [2024-11-08 16:55:21.207735] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:51.777 [2024-11-08 16:55:21.207818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.777 [2024-11-08 16:55:21.212600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:13:51.777 16:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.777 16:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:51.777 [2024-11-08 16:55:21.214954] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.714 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.714 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.714 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.714 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.714 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.714 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:52.714 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.714 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.714 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.974 "name": "raid_bdev1", 00:13:52.974 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:52.974 "strip_size_kb": 0, 00:13:52.974 "state": "online", 00:13:52.974 "raid_level": "raid1", 00:13:52.974 "superblock": true, 00:13:52.974 "num_base_bdevs": 2, 00:13:52.974 "num_base_bdevs_discovered": 2, 00:13:52.974 "num_base_bdevs_operational": 2, 00:13:52.974 "process": { 00:13:52.974 "type": "rebuild", 00:13:52.974 "target": "spare", 00:13:52.974 "progress": { 00:13:52.974 "blocks": 20480, 00:13:52.974 "percent": 32 00:13:52.974 } 00:13:52.974 }, 00:13:52.974 "base_bdevs_list": [ 00:13:52.974 { 00:13:52.974 "name": "spare", 00:13:52.974 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:52.974 "is_configured": true, 00:13:52.974 "data_offset": 2048, 00:13:52.974 "data_size": 63488 00:13:52.974 }, 00:13:52.974 { 00:13:52.974 "name": "BaseBdev2", 00:13:52.974 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:52.974 "is_configured": true, 00:13:52.974 "data_offset": 2048, 00:13:52.974 "data_size": 63488 00:13:52.974 } 00:13:52.974 ] 00:13:52.974 }' 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.974 [2024-11-08 16:55:22.383758] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.974 [2024-11-08 16:55:22.420706] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:52.974 [2024-11-08 16:55:22.420822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.974 [2024-11-08 16:55:22.420842] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.974 [2024-11-08 16:55:22.420853] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.974 "name": "raid_bdev1", 00:13:52.974 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:52.974 "strip_size_kb": 0, 00:13:52.974 "state": "online", 00:13:52.974 "raid_level": "raid1", 00:13:52.974 "superblock": true, 00:13:52.974 "num_base_bdevs": 2, 00:13:52.974 "num_base_bdevs_discovered": 1, 00:13:52.974 "num_base_bdevs_operational": 1, 00:13:52.974 "base_bdevs_list": [ 00:13:52.974 { 00:13:52.974 "name": null, 00:13:52.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.974 "is_configured": false, 00:13:52.974 "data_offset": 0, 00:13:52.974 "data_size": 63488 00:13:52.974 }, 00:13:52.974 { 00:13:52.974 "name": "BaseBdev2", 00:13:52.974 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:52.974 "is_configured": true, 00:13:52.974 "data_offset": 2048, 00:13:52.974 "data_size": 63488 00:13:52.974 } 00:13:52.974 ] 00:13:52.974 }' 00:13:52.974 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.974 16:55:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.543 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:53.543 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.543 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.543 [2024-11-08 16:55:22.900989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:53.543 [2024-11-08 16:55:22.901153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.543 [2024-11-08 16:55:22.901186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:53.543 [2024-11-08 16:55:22.901202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.543 [2024-11-08 16:55:22.901742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.543 [2024-11-08 16:55:22.901778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:53.544 [2024-11-08 16:55:22.901887] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:53.544 [2024-11-08 16:55:22.901905] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:53.544 [2024-11-08 16:55:22.901916] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:53.544 [2024-11-08 16:55:22.901948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.544 [2024-11-08 16:55:22.906717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:53.544 spare 00:13:53.544 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.544 16:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:53.544 [2024-11-08 16:55:22.909023] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:54.483 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.483 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.483 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.483 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.483 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.483 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.483 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.483 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.483 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.483 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.483 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.483 "name": "raid_bdev1", 00:13:54.483 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:54.483 "strip_size_kb": 0, 00:13:54.483 
"state": "online", 00:13:54.483 "raid_level": "raid1", 00:13:54.483 "superblock": true, 00:13:54.483 "num_base_bdevs": 2, 00:13:54.483 "num_base_bdevs_discovered": 2, 00:13:54.483 "num_base_bdevs_operational": 2, 00:13:54.483 "process": { 00:13:54.483 "type": "rebuild", 00:13:54.483 "target": "spare", 00:13:54.483 "progress": { 00:13:54.484 "blocks": 20480, 00:13:54.484 "percent": 32 00:13:54.484 } 00:13:54.484 }, 00:13:54.484 "base_bdevs_list": [ 00:13:54.484 { 00:13:54.484 "name": "spare", 00:13:54.484 "uuid": "c5c7087b-0ba6-5451-a584-08ef56397261", 00:13:54.484 "is_configured": true, 00:13:54.484 "data_offset": 2048, 00:13:54.484 "data_size": 63488 00:13:54.484 }, 00:13:54.484 { 00:13:54.484 "name": "BaseBdev2", 00:13:54.484 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:54.484 "is_configured": true, 00:13:54.484 "data_offset": 2048, 00:13:54.484 "data_size": 63488 00:13:54.484 } 00:13:54.484 ] 00:13:54.484 }' 00:13:54.484 16:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.484 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.484 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.744 [2024-11-08 16:55:24.045931] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.744 [2024-11-08 16:55:24.114728] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:54.744 [2024-11-08 16:55:24.114833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.744 [2024-11-08 16:55:24.114856] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.744 [2024-11-08 16:55:24.114865] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.744 "name": "raid_bdev1", 00:13:54.744 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:54.744 "strip_size_kb": 0, 00:13:54.744 "state": "online", 00:13:54.744 "raid_level": "raid1", 00:13:54.744 "superblock": true, 00:13:54.744 "num_base_bdevs": 2, 00:13:54.744 "num_base_bdevs_discovered": 1, 00:13:54.744 "num_base_bdevs_operational": 1, 00:13:54.744 "base_bdevs_list": [ 00:13:54.744 { 00:13:54.744 "name": null, 00:13:54.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.744 "is_configured": false, 00:13:54.744 "data_offset": 0, 00:13:54.744 "data_size": 63488 00:13:54.744 }, 00:13:54.744 { 00:13:54.744 "name": "BaseBdev2", 00:13:54.744 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:54.744 "is_configured": true, 00:13:54.744 "data_offset": 2048, 00:13:54.744 "data_size": 63488 00:13:54.744 } 00:13:54.744 ] 00:13:54.744 }' 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.744 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.316 "name": "raid_bdev1", 00:13:55.316 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:55.316 "strip_size_kb": 0, 00:13:55.316 "state": "online", 00:13:55.316 "raid_level": "raid1", 00:13:55.316 "superblock": true, 00:13:55.316 "num_base_bdevs": 2, 00:13:55.316 "num_base_bdevs_discovered": 1, 00:13:55.316 "num_base_bdevs_operational": 1, 00:13:55.316 "base_bdevs_list": [ 00:13:55.316 { 00:13:55.316 "name": null, 00:13:55.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.316 "is_configured": false, 00:13:55.316 "data_offset": 0, 00:13:55.316 "data_size": 63488 00:13:55.316 }, 00:13:55.316 { 00:13:55.316 "name": "BaseBdev2", 00:13:55.316 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:55.316 "is_configured": true, 00:13:55.316 "data_offset": 2048, 00:13:55.316 "data_size": 63488 00:13:55.316 } 00:13:55.316 ] 00:13:55.316 }' 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.316 [2024-11-08 16:55:24.759373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:55.316 [2024-11-08 16:55:24.759465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.316 [2024-11-08 16:55:24.759495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:55.316 [2024-11-08 16:55:24.759507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.316 [2024-11-08 16:55:24.760023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.316 [2024-11-08 16:55:24.760047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.316 [2024-11-08 16:55:24.760160] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:55.316 [2024-11-08 16:55:24.760177] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:55.316 [2024-11-08 16:55:24.760188] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:55.316 [2024-11-08 16:55:24.760205] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:55.316 BaseBdev1 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.316 16:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.257 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.517 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.517 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.517 "name": "raid_bdev1", 00:13:56.517 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:56.517 "strip_size_kb": 0, 00:13:56.517 "state": "online", 00:13:56.517 "raid_level": "raid1", 00:13:56.517 "superblock": true, 00:13:56.517 "num_base_bdevs": 2, 00:13:56.517 "num_base_bdevs_discovered": 1, 00:13:56.517 "num_base_bdevs_operational": 1, 00:13:56.517 "base_bdevs_list": [ 00:13:56.517 { 00:13:56.517 "name": null, 00:13:56.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.517 "is_configured": false, 00:13:56.517 "data_offset": 0, 00:13:56.517 "data_size": 63488 00:13:56.517 }, 00:13:56.517 { 00:13:56.517 "name": "BaseBdev2", 00:13:56.517 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:56.517 "is_configured": true, 00:13:56.517 "data_offset": 2048, 00:13:56.517 "data_size": 63488 00:13:56.517 } 00:13:56.517 ] 00:13:56.517 }' 00:13:56.517 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.517 16:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.777 "name": "raid_bdev1", 00:13:56.777 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:56.777 "strip_size_kb": 0, 00:13:56.777 "state": "online", 00:13:56.777 "raid_level": "raid1", 00:13:56.777 "superblock": true, 00:13:56.777 "num_base_bdevs": 2, 00:13:56.777 "num_base_bdevs_discovered": 1, 00:13:56.777 "num_base_bdevs_operational": 1, 00:13:56.777 "base_bdevs_list": [ 00:13:56.777 { 00:13:56.777 "name": null, 00:13:56.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.777 "is_configured": false, 00:13:56.777 "data_offset": 0, 00:13:56.777 "data_size": 63488 00:13:56.777 }, 00:13:56.777 { 00:13:56.777 "name": "BaseBdev2", 00:13:56.777 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:56.777 "is_configured": true, 00:13:56.777 "data_offset": 2048, 00:13:56.777 "data_size": 63488 00:13:56.777 } 00:13:56.777 ] 00:13:56.777 }' 00:13:56.777 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@650 -- # local es=0 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.037 [2024-11-08 16:55:26.399348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.037 [2024-11-08 16:55:26.399651] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:57.037 [2024-11-08 16:55:26.399723] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:57.037 request: 00:13:57.037 { 00:13:57.037 "base_bdev": "BaseBdev1", 00:13:57.037 "raid_bdev": "raid_bdev1", 00:13:57.037 "method": "bdev_raid_add_base_bdev", 00:13:57.037 "req_id": 1 00:13:57.037 } 00:13:57.037 Got JSON-RPC error response 00:13:57.037 response: 00:13:57.037 { 00:13:57.037 "code": -22, 00:13:57.037 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:57.037 } 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:57.037 16:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.976 "name": "raid_bdev1", 00:13:57.976 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:57.976 "strip_size_kb": 0, 00:13:57.976 "state": "online", 00:13:57.976 "raid_level": "raid1", 00:13:57.976 "superblock": true, 00:13:57.976 "num_base_bdevs": 2, 00:13:57.976 "num_base_bdevs_discovered": 1, 00:13:57.976 "num_base_bdevs_operational": 1, 00:13:57.976 "base_bdevs_list": [ 00:13:57.976 { 00:13:57.976 "name": null, 00:13:57.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.976 "is_configured": false, 00:13:57.976 "data_offset": 0, 00:13:57.976 "data_size": 63488 00:13:57.976 }, 00:13:57.976 { 00:13:57.976 "name": "BaseBdev2", 00:13:57.976 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:57.976 "is_configured": true, 00:13:57.976 "data_offset": 2048, 00:13:57.976 "data_size": 63488 00:13:57.976 } 00:13:57.976 ] 00:13:57.976 }' 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.976 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.544 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.544 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.544 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.544 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.544 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.544 16:55:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.544 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.544 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.544 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.544 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.544 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.544 "name": "raid_bdev1", 00:13:58.544 "uuid": "a52bfbb9-5039-4339-aedf-9983918ab2a4", 00:13:58.544 "strip_size_kb": 0, 00:13:58.544 "state": "online", 00:13:58.544 "raid_level": "raid1", 00:13:58.544 "superblock": true, 00:13:58.544 "num_base_bdevs": 2, 00:13:58.544 "num_base_bdevs_discovered": 1, 00:13:58.544 "num_base_bdevs_operational": 1, 00:13:58.544 "base_bdevs_list": [ 00:13:58.544 { 00:13:58.544 "name": null, 00:13:58.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.544 "is_configured": false, 00:13:58.544 "data_offset": 0, 00:13:58.544 "data_size": 63488 00:13:58.544 }, 00:13:58.544 { 00:13:58.544 "name": "BaseBdev2", 00:13:58.544 "uuid": "23004cdc-df82-5c3a-ae98-71788ff90728", 00:13:58.544 "is_configured": true, 00:13:58.544 "data_offset": 2048, 00:13:58.544 "data_size": 63488 00:13:58.544 } 00:13:58.544 ] 00:13:58.544 }' 00:13:58.544 16:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.544 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.544 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.544 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.544 16:55:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87564 00:13:58.544 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87564 ']' 00:13:58.544 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87564 00:13:58.544 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:58.544 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.544 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87564 00:13:58.803 killing process with pid 87564 00:13:58.803 Received shutdown signal, test time was about 17.334735 seconds 00:13:58.803 00:13:58.803 Latency(us) 00:13:58.803 [2024-11-08T16:55:28.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.803 [2024-11-08T16:55:28.331Z] =================================================================================================================== 00:13:58.803 [2024-11-08T16:55:28.331Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:58.803 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:58.803 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.803 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87564' 00:13:58.803 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87564 00:13:58.803 [2024-11-08 16:55:28.085945] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.803 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87564 00:13:58.803 [2024-11-08 16:55:28.086129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.803 [2024-11-08 16:55:28.086194] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.803 [2024-11-08 16:55:28.086211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:58.803 [2024-11-08 16:55:28.114980] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.061 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:59.061 00:13:59.061 real 0m18.794s 00:13:59.061 user 0m25.409s 00:13:59.061 sys 0m2.177s 00:13:59.061 ************************************ 00:13:59.061 END TEST raid_rebuild_test_sb_io 00:13:59.061 ************************************ 00:13:59.061 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.061 16:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.061 16:55:28 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:59.061 16:55:28 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:59.061 16:55:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:59.061 16:55:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.061 16:55:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.061 ************************************ 00:13:59.061 START TEST raid_rebuild_test 00:13:59.061 ************************************ 00:13:59.061 16:55:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:59.062 16:55:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88237 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88237 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88237 ']' 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.062 16:55:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.062 [2024-11-08 16:55:28.548334] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:59.062 [2024-11-08 16:55:28.548591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88237 ] 00:13:59.062 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:59.062 Zero copy mechanism will not be used. 00:13:59.320 [2024-11-08 16:55:28.716432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.320 [2024-11-08 16:55:28.772158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.320 [2024-11-08 16:55:28.817894] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.320 [2024-11-08 16:55:28.818039] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.256 BaseBdev1_malloc 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:00.256 [2024-11-08 16:55:29.594940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:00.256 [2024-11-08 16:55:29.595165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.256 [2024-11-08 16:55:29.595263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:00.256 [2024-11-08 16:55:29.595330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.256 [2024-11-08 16:55:29.598077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.256 [2024-11-08 16:55:29.598197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:00.256 BaseBdev1 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.256 BaseBdev2_malloc 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.256 [2024-11-08 16:55:29.635616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:00.256 [2024-11-08 16:55:29.635738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:00.256 [2024-11-08 16:55:29.635774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:00.256 [2024-11-08 16:55:29.635787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.256 [2024-11-08 16:55:29.638679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.256 [2024-11-08 16:55:29.638738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:00.256 BaseBdev2 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.256 BaseBdev3_malloc 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.256 [2024-11-08 16:55:29.665338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:00.256 [2024-11-08 16:55:29.665431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.256 [2024-11-08 16:55:29.665468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:00.256 [2024-11-08 16:55:29.665479] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.256 [2024-11-08 16:55:29.668166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.256 [2024-11-08 16:55:29.668228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:00.256 BaseBdev3 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.256 BaseBdev4_malloc 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.256 [2024-11-08 16:55:29.694905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:00.256 [2024-11-08 16:55:29.695003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.256 [2024-11-08 16:55:29.695039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:00.256 [2024-11-08 16:55:29.695050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.256 [2024-11-08 16:55:29.697674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.256 [2024-11-08 16:55:29.697726] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:00.256 BaseBdev4 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.256 spare_malloc 00:14:00.256 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.257 spare_delay 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.257 [2024-11-08 16:55:29.736522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:00.257 [2024-11-08 16:55:29.736623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.257 [2024-11-08 16:55:29.736675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:00.257 [2024-11-08 16:55:29.736687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.257 [2024-11-08 
16:55:29.739367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.257 [2024-11-08 16:55:29.739424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:00.257 spare 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.257 [2024-11-08 16:55:29.748630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.257 [2024-11-08 16:55:29.750975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.257 [2024-11-08 16:55:29.751078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.257 [2024-11-08 16:55:29.751131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:00.257 [2024-11-08 16:55:29.751269] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:00.257 [2024-11-08 16:55:29.751282] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:00.257 [2024-11-08 16:55:29.751657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:00.257 [2024-11-08 16:55:29.751849] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:00.257 [2024-11-08 16:55:29.751866] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:00.257 [2024-11-08 16:55:29.752052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.257 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.516 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.516 "name": "raid_bdev1", 00:14:00.516 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:00.516 "strip_size_kb": 0, 00:14:00.516 "state": "online", 00:14:00.516 "raid_level": 
"raid1", 00:14:00.516 "superblock": false, 00:14:00.516 "num_base_bdevs": 4, 00:14:00.516 "num_base_bdevs_discovered": 4, 00:14:00.516 "num_base_bdevs_operational": 4, 00:14:00.516 "base_bdevs_list": [ 00:14:00.516 { 00:14:00.516 "name": "BaseBdev1", 00:14:00.516 "uuid": "542b5533-335b-5fbf-ad32-b57e2b964157", 00:14:00.516 "is_configured": true, 00:14:00.516 "data_offset": 0, 00:14:00.516 "data_size": 65536 00:14:00.516 }, 00:14:00.516 { 00:14:00.516 "name": "BaseBdev2", 00:14:00.516 "uuid": "2f1bd4ab-9697-59df-b4e5-5389c648cf32", 00:14:00.516 "is_configured": true, 00:14:00.516 "data_offset": 0, 00:14:00.516 "data_size": 65536 00:14:00.516 }, 00:14:00.516 { 00:14:00.516 "name": "BaseBdev3", 00:14:00.516 "uuid": "99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:00.516 "is_configured": true, 00:14:00.516 "data_offset": 0, 00:14:00.516 "data_size": 65536 00:14:00.516 }, 00:14:00.516 { 00:14:00.516 "name": "BaseBdev4", 00:14:00.516 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:00.516 "is_configured": true, 00:14:00.516 "data_offset": 0, 00:14:00.516 "data_size": 65536 00:14:00.516 } 00:14:00.516 ] 00:14:00.516 }' 00:14:00.516 16:55:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.516 16:55:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.774 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:00.774 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.774 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.774 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.774 [2024-11-08 16:55:30.220229] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.774 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.774 16:55:30 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:00.774 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.774 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:00.774 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.774 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.774 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.032 16:55:30 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:01.032 [2024-11-08 16:55:30.555470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:01.290 /dev/nbd0 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.290 1+0 records in 00:14:01.290 1+0 records out 00:14:01.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411348 s, 10.0 MB/s 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:01.290 16:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:07.860 65536+0 records in 00:14:07.860 65536+0 records out 00:14:07.860 33554432 bytes (34 MB, 32 MiB) copied, 6.56685 s, 5.1 MB/s 00:14:07.860 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:07.860 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.860 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:07.860 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.861 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:07.861 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.861 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:08.121 [2024-11-08 16:55:37.445831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.121 
16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.121 [2024-11-08 16:55:37.481858] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.121 16:55:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.121 "name": "raid_bdev1", 00:14:08.121 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:08.121 "strip_size_kb": 0, 00:14:08.121 "state": "online", 00:14:08.121 "raid_level": "raid1", 00:14:08.121 "superblock": false, 00:14:08.121 "num_base_bdevs": 4, 00:14:08.121 "num_base_bdevs_discovered": 3, 00:14:08.121 "num_base_bdevs_operational": 3, 00:14:08.121 "base_bdevs_list": [ 00:14:08.121 { 00:14:08.121 "name": null, 00:14:08.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.121 "is_configured": false, 00:14:08.121 "data_offset": 0, 00:14:08.121 "data_size": 65536 00:14:08.121 }, 00:14:08.121 { 00:14:08.121 "name": "BaseBdev2", 00:14:08.121 "uuid": "2f1bd4ab-9697-59df-b4e5-5389c648cf32", 00:14:08.121 "is_configured": true, 00:14:08.121 "data_offset": 0, 00:14:08.121 "data_size": 65536 00:14:08.121 }, 00:14:08.121 { 00:14:08.121 "name": "BaseBdev3", 00:14:08.121 "uuid": "99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:08.121 "is_configured": true, 00:14:08.121 "data_offset": 0, 00:14:08.121 "data_size": 65536 00:14:08.121 }, 00:14:08.121 { 00:14:08.121 "name": "BaseBdev4", 00:14:08.121 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:08.121 
"is_configured": true, 00:14:08.121 "data_offset": 0, 00:14:08.121 "data_size": 65536 00:14:08.121 } 00:14:08.121 ] 00:14:08.121 }' 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.121 16:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.690 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.690 16:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.690 16:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.690 [2024-11-08 16:55:37.965121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.690 [2024-11-08 16:55:37.968854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:14:08.690 [2024-11-08 16:55:37.971177] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.690 16:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.690 16:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:09.630 16:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.630 16:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.630 16:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.630 16:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.630 16:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.630 16:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.630 16:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:09.630 16:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.630 16:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.630 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.630 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.630 "name": "raid_bdev1", 00:14:09.630 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:09.630 "strip_size_kb": 0, 00:14:09.630 "state": "online", 00:14:09.630 "raid_level": "raid1", 00:14:09.630 "superblock": false, 00:14:09.630 "num_base_bdevs": 4, 00:14:09.630 "num_base_bdevs_discovered": 4, 00:14:09.630 "num_base_bdevs_operational": 4, 00:14:09.630 "process": { 00:14:09.630 "type": "rebuild", 00:14:09.630 "target": "spare", 00:14:09.630 "progress": { 00:14:09.630 "blocks": 20480, 00:14:09.630 "percent": 31 00:14:09.630 } 00:14:09.630 }, 00:14:09.630 "base_bdevs_list": [ 00:14:09.630 { 00:14:09.630 "name": "spare", 00:14:09.630 "uuid": "a1e313c7-baa0-5e8b-9c51-f38c9aba7453", 00:14:09.630 "is_configured": true, 00:14:09.630 "data_offset": 0, 00:14:09.630 "data_size": 65536 00:14:09.630 }, 00:14:09.630 { 00:14:09.630 "name": "BaseBdev2", 00:14:09.630 "uuid": "2f1bd4ab-9697-59df-b4e5-5389c648cf32", 00:14:09.630 "is_configured": true, 00:14:09.630 "data_offset": 0, 00:14:09.630 "data_size": 65536 00:14:09.630 }, 00:14:09.630 { 00:14:09.630 "name": "BaseBdev3", 00:14:09.630 "uuid": "99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:09.630 "is_configured": true, 00:14:09.630 "data_offset": 0, 00:14:09.630 "data_size": 65536 00:14:09.630 }, 00:14:09.630 { 00:14:09.630 "name": "BaseBdev4", 00:14:09.630 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:09.630 "is_configured": true, 00:14:09.630 "data_offset": 0, 00:14:09.630 "data_size": 65536 00:14:09.630 } 00:14:09.630 ] 00:14:09.630 }' 00:14:09.630 16:55:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.630 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.630 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.630 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.630 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:09.630 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.630 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.630 [2024-11-08 16:55:39.123190] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.890 [2024-11-08 16:55:39.177627] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:09.890 [2024-11-08 16:55:39.177841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.890 [2024-11-08 16:55:39.177870] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.890 [2024-11-08 16:55:39.177881] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:09.890 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.890 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:09.890 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.890 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.890 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.891 "name": "raid_bdev1", 00:14:09.891 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:09.891 "strip_size_kb": 0, 00:14:09.891 "state": "online", 00:14:09.891 "raid_level": "raid1", 00:14:09.891 "superblock": false, 00:14:09.891 "num_base_bdevs": 4, 00:14:09.891 "num_base_bdevs_discovered": 3, 00:14:09.891 "num_base_bdevs_operational": 3, 00:14:09.891 "base_bdevs_list": [ 00:14:09.891 { 00:14:09.891 "name": null, 00:14:09.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.891 "is_configured": false, 00:14:09.891 "data_offset": 0, 00:14:09.891 "data_size": 65536 00:14:09.891 }, 00:14:09.891 { 00:14:09.891 "name": "BaseBdev2", 00:14:09.891 "uuid": "2f1bd4ab-9697-59df-b4e5-5389c648cf32", 00:14:09.891 "is_configured": true, 00:14:09.891 "data_offset": 0, 00:14:09.891 "data_size": 65536 00:14:09.891 }, 00:14:09.891 { 
00:14:09.891 "name": "BaseBdev3", 00:14:09.891 "uuid": "99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:09.891 "is_configured": true, 00:14:09.891 "data_offset": 0, 00:14:09.891 "data_size": 65536 00:14:09.891 }, 00:14:09.891 { 00:14:09.891 "name": "BaseBdev4", 00:14:09.891 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:09.891 "is_configured": true, 00:14:09.891 "data_offset": 0, 00:14:09.891 "data_size": 65536 00:14:09.891 } 00:14:09.891 ] 00:14:09.891 }' 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.891 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.150 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.150 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.150 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.150 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.150 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.150 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.150 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.150 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.150 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.150 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.409 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.409 "name": "raid_bdev1", 00:14:10.409 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:10.409 "strip_size_kb": 0, 00:14:10.409 "state": "online", 
00:14:10.409 "raid_level": "raid1", 00:14:10.409 "superblock": false, 00:14:10.409 "num_base_bdevs": 4, 00:14:10.409 "num_base_bdevs_discovered": 3, 00:14:10.409 "num_base_bdevs_operational": 3, 00:14:10.409 "base_bdevs_list": [ 00:14:10.409 { 00:14:10.409 "name": null, 00:14:10.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.409 "is_configured": false, 00:14:10.409 "data_offset": 0, 00:14:10.409 "data_size": 65536 00:14:10.409 }, 00:14:10.409 { 00:14:10.409 "name": "BaseBdev2", 00:14:10.409 "uuid": "2f1bd4ab-9697-59df-b4e5-5389c648cf32", 00:14:10.409 "is_configured": true, 00:14:10.409 "data_offset": 0, 00:14:10.409 "data_size": 65536 00:14:10.409 }, 00:14:10.409 { 00:14:10.409 "name": "BaseBdev3", 00:14:10.409 "uuid": "99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:10.409 "is_configured": true, 00:14:10.409 "data_offset": 0, 00:14:10.409 "data_size": 65536 00:14:10.409 }, 00:14:10.409 { 00:14:10.409 "name": "BaseBdev4", 00:14:10.409 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:10.409 "is_configured": true, 00:14:10.409 "data_offset": 0, 00:14:10.409 "data_size": 65536 00:14:10.409 } 00:14:10.409 ] 00:14:10.409 }' 00:14:10.409 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.409 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.409 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.409 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.409 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:10.409 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.409 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.409 [2024-11-08 16:55:39.789605] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.409 [2024-11-08 16:55:39.793313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:10.409 [2024-11-08 16:55:39.795802] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.410 16:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.410 16:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.348 "name": "raid_bdev1", 00:14:11.348 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:11.348 "strip_size_kb": 0, 00:14:11.348 "state": "online", 00:14:11.348 "raid_level": "raid1", 00:14:11.348 "superblock": false, 00:14:11.348 "num_base_bdevs": 4, 00:14:11.348 
"num_base_bdevs_discovered": 4, 00:14:11.348 "num_base_bdevs_operational": 4, 00:14:11.348 "process": { 00:14:11.348 "type": "rebuild", 00:14:11.348 "target": "spare", 00:14:11.348 "progress": { 00:14:11.348 "blocks": 20480, 00:14:11.348 "percent": 31 00:14:11.348 } 00:14:11.348 }, 00:14:11.348 "base_bdevs_list": [ 00:14:11.348 { 00:14:11.348 "name": "spare", 00:14:11.348 "uuid": "a1e313c7-baa0-5e8b-9c51-f38c9aba7453", 00:14:11.348 "is_configured": true, 00:14:11.348 "data_offset": 0, 00:14:11.348 "data_size": 65536 00:14:11.348 }, 00:14:11.348 { 00:14:11.348 "name": "BaseBdev2", 00:14:11.348 "uuid": "2f1bd4ab-9697-59df-b4e5-5389c648cf32", 00:14:11.348 "is_configured": true, 00:14:11.348 "data_offset": 0, 00:14:11.348 "data_size": 65536 00:14:11.348 }, 00:14:11.348 { 00:14:11.348 "name": "BaseBdev3", 00:14:11.348 "uuid": "99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:11.348 "is_configured": true, 00:14:11.348 "data_offset": 0, 00:14:11.348 "data_size": 65536 00:14:11.348 }, 00:14:11.348 { 00:14:11.348 "name": "BaseBdev4", 00:14:11.348 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:11.348 "is_configured": true, 00:14:11.348 "data_offset": 0, 00:14:11.348 "data_size": 65536 00:14:11.348 } 00:14:11.348 ] 00:14:11.348 }' 00:14:11.348 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.608 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.608 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.608 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.608 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:11.608 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:11.608 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:14:11.608 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:11.608 16:55:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:11.608 16:55:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.608 16:55:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.608 [2024-11-08 16:55:40.959421] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:11.608 [2024-11-08 16:55:41.001366] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.608 16:55:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.608 "name": "raid_bdev1", 00:14:11.608 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:11.608 "strip_size_kb": 0, 00:14:11.608 "state": "online", 00:14:11.608 "raid_level": "raid1", 00:14:11.608 "superblock": false, 00:14:11.608 "num_base_bdevs": 4, 00:14:11.608 "num_base_bdevs_discovered": 3, 00:14:11.608 "num_base_bdevs_operational": 3, 00:14:11.608 "process": { 00:14:11.608 "type": "rebuild", 00:14:11.608 "target": "spare", 00:14:11.608 "progress": { 00:14:11.608 "blocks": 24576, 00:14:11.608 "percent": 37 00:14:11.608 } 00:14:11.608 }, 00:14:11.608 "base_bdevs_list": [ 00:14:11.608 { 00:14:11.608 "name": "spare", 00:14:11.608 "uuid": "a1e313c7-baa0-5e8b-9c51-f38c9aba7453", 00:14:11.608 "is_configured": true, 00:14:11.608 "data_offset": 0, 00:14:11.608 "data_size": 65536 00:14:11.608 }, 00:14:11.608 { 00:14:11.608 "name": null, 00:14:11.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.608 "is_configured": false, 00:14:11.608 "data_offset": 0, 00:14:11.608 "data_size": 65536 00:14:11.608 }, 00:14:11.608 { 00:14:11.608 "name": "BaseBdev3", 00:14:11.608 "uuid": "99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:11.608 "is_configured": true, 00:14:11.608 "data_offset": 0, 00:14:11.608 "data_size": 65536 00:14:11.608 }, 00:14:11.608 { 00:14:11.608 "name": "BaseBdev4", 00:14:11.608 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:11.608 "is_configured": true, 00:14:11.608 "data_offset": 0, 00:14:11.608 "data_size": 65536 00:14:11.608 } 00:14:11.608 ] 00:14:11.608 }' 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.608 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=366 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.868 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.868 "name": "raid_bdev1", 00:14:11.868 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:11.868 "strip_size_kb": 0, 00:14:11.868 "state": "online", 00:14:11.868 "raid_level": "raid1", 00:14:11.868 "superblock": false, 00:14:11.868 "num_base_bdevs": 4, 00:14:11.868 "num_base_bdevs_discovered": 3, 00:14:11.868 "num_base_bdevs_operational": 3, 00:14:11.868 "process": { 00:14:11.868 "type": "rebuild", 00:14:11.868 "target": "spare", 00:14:11.868 "progress": { 
00:14:11.868 "blocks": 26624, 00:14:11.868 "percent": 40 00:14:11.868 } 00:14:11.869 }, 00:14:11.869 "base_bdevs_list": [ 00:14:11.869 { 00:14:11.869 "name": "spare", 00:14:11.869 "uuid": "a1e313c7-baa0-5e8b-9c51-f38c9aba7453", 00:14:11.869 "is_configured": true, 00:14:11.869 "data_offset": 0, 00:14:11.869 "data_size": 65536 00:14:11.869 }, 00:14:11.869 { 00:14:11.869 "name": null, 00:14:11.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.869 "is_configured": false, 00:14:11.869 "data_offset": 0, 00:14:11.869 "data_size": 65536 00:14:11.869 }, 00:14:11.869 { 00:14:11.869 "name": "BaseBdev3", 00:14:11.869 "uuid": "99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:11.869 "is_configured": true, 00:14:11.869 "data_offset": 0, 00:14:11.869 "data_size": 65536 00:14:11.869 }, 00:14:11.869 { 00:14:11.869 "name": "BaseBdev4", 00:14:11.869 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:11.869 "is_configured": true, 00:14:11.869 "data_offset": 0, 00:14:11.869 "data_size": 65536 00:14:11.869 } 00:14:11.869 ] 00:14:11.869 }' 00:14:11.869 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.869 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.869 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.869 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.869 16:55:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.807 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.807 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.807 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.807 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:12.807 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.807 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.807 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.807 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.807 16:55:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.807 16:55:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.807 16:55:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.151 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.151 "name": "raid_bdev1", 00:14:13.151 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:13.151 "strip_size_kb": 0, 00:14:13.151 "state": "online", 00:14:13.151 "raid_level": "raid1", 00:14:13.151 "superblock": false, 00:14:13.151 "num_base_bdevs": 4, 00:14:13.151 "num_base_bdevs_discovered": 3, 00:14:13.151 "num_base_bdevs_operational": 3, 00:14:13.151 "process": { 00:14:13.151 "type": "rebuild", 00:14:13.151 "target": "spare", 00:14:13.151 "progress": { 00:14:13.151 "blocks": 51200, 00:14:13.151 "percent": 78 00:14:13.151 } 00:14:13.151 }, 00:14:13.151 "base_bdevs_list": [ 00:14:13.151 { 00:14:13.151 "name": "spare", 00:14:13.151 "uuid": "a1e313c7-baa0-5e8b-9c51-f38c9aba7453", 00:14:13.151 "is_configured": true, 00:14:13.151 "data_offset": 0, 00:14:13.151 "data_size": 65536 00:14:13.151 }, 00:14:13.151 { 00:14:13.151 "name": null, 00:14:13.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.151 "is_configured": false, 00:14:13.151 "data_offset": 0, 00:14:13.151 "data_size": 65536 00:14:13.151 }, 00:14:13.151 { 00:14:13.151 "name": "BaseBdev3", 00:14:13.151 "uuid": 
"99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:13.151 "is_configured": true, 00:14:13.151 "data_offset": 0, 00:14:13.151 "data_size": 65536 00:14:13.151 }, 00:14:13.151 { 00:14:13.151 "name": "BaseBdev4", 00:14:13.151 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:13.151 "is_configured": true, 00:14:13.151 "data_offset": 0, 00:14:13.151 "data_size": 65536 00:14:13.151 } 00:14:13.151 ] 00:14:13.151 }' 00:14:13.151 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.151 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.151 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.151 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.151 16:55:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:13.719 [2024-11-08 16:55:43.011280] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:13.719 [2024-11-08 16:55:43.011527] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:13.719 [2024-11-08 16:55:43.011627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.979 16:55:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.979 "name": "raid_bdev1", 00:14:13.979 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:13.979 "strip_size_kb": 0, 00:14:13.979 "state": "online", 00:14:13.979 "raid_level": "raid1", 00:14:13.979 "superblock": false, 00:14:13.979 "num_base_bdevs": 4, 00:14:13.979 "num_base_bdevs_discovered": 3, 00:14:13.979 "num_base_bdevs_operational": 3, 00:14:13.979 "base_bdevs_list": [ 00:14:13.979 { 00:14:13.979 "name": "spare", 00:14:13.979 "uuid": "a1e313c7-baa0-5e8b-9c51-f38c9aba7453", 00:14:13.979 "is_configured": true, 00:14:13.979 "data_offset": 0, 00:14:13.979 "data_size": 65536 00:14:13.979 }, 00:14:13.979 { 00:14:13.979 "name": null, 00:14:13.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.979 "is_configured": false, 00:14:13.979 "data_offset": 0, 00:14:13.979 "data_size": 65536 00:14:13.979 }, 00:14:13.979 { 00:14:13.979 "name": "BaseBdev3", 00:14:13.979 "uuid": "99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:13.979 "is_configured": true, 00:14:13.979 "data_offset": 0, 00:14:13.979 "data_size": 65536 00:14:13.979 }, 00:14:13.979 { 00:14:13.979 "name": "BaseBdev4", 00:14:13.979 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:13.979 "is_configured": true, 00:14:13.979 "data_offset": 0, 00:14:13.979 "data_size": 65536 00:14:13.979 } 00:14:13.979 ] 00:14:13.979 }' 00:14:13.979 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.245 "name": "raid_bdev1", 00:14:14.245 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:14.245 "strip_size_kb": 0, 00:14:14.245 "state": "online", 00:14:14.245 "raid_level": "raid1", 00:14:14.245 "superblock": false, 00:14:14.245 "num_base_bdevs": 4, 00:14:14.245 "num_base_bdevs_discovered": 3, 00:14:14.245 "num_base_bdevs_operational": 3, 00:14:14.245 
"base_bdevs_list": [ 00:14:14.245 { 00:14:14.245 "name": "spare", 00:14:14.245 "uuid": "a1e313c7-baa0-5e8b-9c51-f38c9aba7453", 00:14:14.245 "is_configured": true, 00:14:14.245 "data_offset": 0, 00:14:14.245 "data_size": 65536 00:14:14.245 }, 00:14:14.245 { 00:14:14.245 "name": null, 00:14:14.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.245 "is_configured": false, 00:14:14.245 "data_offset": 0, 00:14:14.245 "data_size": 65536 00:14:14.245 }, 00:14:14.245 { 00:14:14.245 "name": "BaseBdev3", 00:14:14.245 "uuid": "99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:14.245 "is_configured": true, 00:14:14.245 "data_offset": 0, 00:14:14.245 "data_size": 65536 00:14:14.245 }, 00:14:14.245 { 00:14:14.245 "name": "BaseBdev4", 00:14:14.245 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:14.245 "is_configured": true, 00:14:14.245 "data_offset": 0, 00:14:14.245 "data_size": 65536 00:14:14.245 } 00:14:14.245 ] 00:14:14.245 }' 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.245 "name": "raid_bdev1", 00:14:14.245 "uuid": "3f9b843b-8b98-4770-a798-4c0aad631274", 00:14:14.245 "strip_size_kb": 0, 00:14:14.245 "state": "online", 00:14:14.245 "raid_level": "raid1", 00:14:14.245 "superblock": false, 00:14:14.245 "num_base_bdevs": 4, 00:14:14.245 "num_base_bdevs_discovered": 3, 00:14:14.245 "num_base_bdevs_operational": 3, 00:14:14.245 "base_bdevs_list": [ 00:14:14.245 { 00:14:14.245 "name": "spare", 00:14:14.245 "uuid": "a1e313c7-baa0-5e8b-9c51-f38c9aba7453", 00:14:14.245 "is_configured": true, 00:14:14.245 "data_offset": 0, 00:14:14.245 "data_size": 65536 00:14:14.245 }, 00:14:14.245 { 00:14:14.245 "name": null, 00:14:14.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.245 "is_configured": false, 00:14:14.245 "data_offset": 0, 00:14:14.245 "data_size": 65536 00:14:14.245 }, 00:14:14.245 { 00:14:14.245 "name": "BaseBdev3", 00:14:14.245 "uuid": 
"99a1a4da-62b5-5134-a32d-5397c5f583e1", 00:14:14.245 "is_configured": true, 00:14:14.245 "data_offset": 0, 00:14:14.245 "data_size": 65536 00:14:14.245 }, 00:14:14.245 { 00:14:14.245 "name": "BaseBdev4", 00:14:14.245 "uuid": "9a7896dd-99be-5694-a6df-6f551c50075f", 00:14:14.245 "is_configured": true, 00:14:14.245 "data_offset": 0, 00:14:14.245 "data_size": 65536 00:14:14.245 } 00:14:14.245 ] 00:14:14.245 }' 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.245 16:55:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.813 [2024-11-08 16:55:44.102260] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.813 [2024-11-08 16:55:44.102314] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.813 [2024-11-08 16:55:44.102425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.813 [2024-11-08 16:55:44.102522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.813 [2024-11-08 16:55:44.102536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:14.813 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:15.073 /dev/nbd0 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:15.073 16:55:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.073 1+0 records in 00:14:15.073 1+0 records out 00:14:15.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295194 s, 13.9 MB/s 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:15.073 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:15.333 /dev/nbd1 00:14:15.333 
16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.333 1+0 records in 00:14:15.333 1+0 records out 00:14:15.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003275 s, 12.5 MB/s 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.333 16:55:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:15.593 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:15.593 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:15.593 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:15.593 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.593 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.593 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:15.593 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:15.593 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.593 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.593 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88237 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88237 ']' 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88237 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.853 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88237 00:14:15.853 killing process with pid 88237 00:14:15.853 Received shutdown signal, test time was about 60.000000 seconds 00:14:15.853 00:14:15.853 Latency(us) 00:14:15.853 [2024-11-08T16:55:45.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.853 [2024-11-08T16:55:45.381Z] 
=================================================================================================================== 00:14:15.853 [2024-11-08T16:55:45.382Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:15.854 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:15.854 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:15.854 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88237' 00:14:15.854 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88237 00:14:15.854 [2024-11-08 16:55:45.345895] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.854 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88237 00:14:16.112 [2024-11-08 16:55:45.399585] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:16.371 ************************************ 00:14:16.371 END TEST raid_rebuild_test 00:14:16.371 ************************************ 00:14:16.371 00:14:16.371 real 0m17.208s 00:14:16.371 user 0m19.377s 00:14:16.371 sys 0m3.349s 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.371 16:55:45 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:16.371 16:55:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:16.371 16:55:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.371 16:55:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:16.371 ************************************ 00:14:16.371 START TEST raid_rebuild_test_sb 00:14:16.371 
************************************ 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88679 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88679 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88679 ']' 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.371 16:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.371 [2024-11-08 16:55:45.826001] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:16.371 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:16.371 Zero copy mechanism will not be used. 00:14:16.371 [2024-11-08 16:55:45.826243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88679 ] 00:14:16.630 [2024-11-08 16:55:45.991505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.630 [2024-11-08 16:55:46.043517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.630 [2024-11-08 16:55:46.086537] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.630 [2024-11-08 16:55:46.086572] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.232 BaseBdev1_malloc 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.232 [2024-11-08 16:55:46.730133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:17.232 [2024-11-08 16:55:46.730288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.232 [2024-11-08 16:55:46.730325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:17.232 [2024-11-08 16:55:46.730344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.232 [2024-11-08 16:55:46.732868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.232 [2024-11-08 16:55:46.732907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:17.232 BaseBdev1 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.232 16:55:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:17.491 BaseBdev2_malloc 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.491 [2024-11-08 16:55:46.767127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:17.491 [2024-11-08 16:55:46.767205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.491 [2024-11-08 16:55:46.767230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:17.491 [2024-11-08 16:55:46.767240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.491 [2024-11-08 16:55:46.769756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.491 [2024-11-08 16:55:46.769858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:17.491 BaseBdev2 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.491 BaseBdev3_malloc 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.491 16:55:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.491 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.491 [2024-11-08 16:55:46.796399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:17.491 [2024-11-08 16:55:46.796467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.492 [2024-11-08 16:55:46.796495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:17.492 [2024-11-08 16:55:46.796505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.492 [2024-11-08 16:55:46.799009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.492 [2024-11-08 16:55:46.799049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:17.492 BaseBdev3 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.492 BaseBdev4_malloc 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.492 [2024-11-08 16:55:46.825562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:17.492 [2024-11-08 16:55:46.825671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.492 [2024-11-08 16:55:46.825703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:17.492 [2024-11-08 16:55:46.825715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.492 [2024-11-08 16:55:46.828137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.492 [2024-11-08 16:55:46.828265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:17.492 BaseBdev4 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.492 spare_malloc 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.492 spare_delay 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.492 
16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.492 [2024-11-08 16:55:46.866842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:17.492 [2024-11-08 16:55:46.866986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.492 [2024-11-08 16:55:46.867020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:17.492 [2024-11-08 16:55:46.867031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.492 [2024-11-08 16:55:46.869469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.492 [2024-11-08 16:55:46.869510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:17.492 spare 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.492 [2024-11-08 16:55:46.878934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.492 [2024-11-08 16:55:46.881097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.492 [2024-11-08 16:55:46.881178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.492 [2024-11-08 16:55:46.881228] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:17.492 [2024-11-08 16:55:46.881428] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:17.492 [2024-11-08 16:55:46.881441] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:17.492 [2024-11-08 16:55:46.881779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:17.492 [2024-11-08 16:55:46.881970] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:17.492 [2024-11-08 16:55:46.881990] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:17.492 [2024-11-08 16:55:46.882173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.492 16:55:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.492 "name": "raid_bdev1", 00:14:17.492 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:17.492 "strip_size_kb": 0, 00:14:17.492 "state": "online", 00:14:17.492 "raid_level": "raid1", 00:14:17.492 "superblock": true, 00:14:17.492 "num_base_bdevs": 4, 00:14:17.492 "num_base_bdevs_discovered": 4, 00:14:17.492 "num_base_bdevs_operational": 4, 00:14:17.492 "base_bdevs_list": [ 00:14:17.492 { 00:14:17.492 "name": "BaseBdev1", 00:14:17.492 "uuid": "5476d64c-f135-5cea-99b8-4c4bba5fbf1b", 00:14:17.492 "is_configured": true, 00:14:17.492 "data_offset": 2048, 00:14:17.492 "data_size": 63488 00:14:17.492 }, 00:14:17.492 { 00:14:17.492 "name": "BaseBdev2", 00:14:17.492 "uuid": "3dc5a77c-4135-553f-9321-a8ebddee8302", 00:14:17.492 "is_configured": true, 00:14:17.492 "data_offset": 2048, 00:14:17.492 "data_size": 63488 00:14:17.492 }, 00:14:17.492 { 00:14:17.492 "name": "BaseBdev3", 00:14:17.492 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:17.492 "is_configured": true, 00:14:17.492 "data_offset": 2048, 00:14:17.492 "data_size": 63488 00:14:17.492 }, 00:14:17.492 { 00:14:17.492 "name": "BaseBdev4", 00:14:17.492 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:17.492 "is_configured": true, 00:14:17.492 "data_offset": 2048, 00:14:17.492 "data_size": 
63488 00:14:17.492 } 00:14:17.492 ] 00:14:17.492 }' 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.492 16:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.060 [2024-11-08 16:55:47.378437] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:18.060 16:55:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.060 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:18.320 [2024-11-08 16:55:47.665668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:18.320 /dev/nbd0 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.320 1+0 records in 00:14:18.320 1+0 records out 00:14:18.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531617 s, 7.7 MB/s 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:18.320 16:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:24.909 63488+0 records in 00:14:24.909 63488+0 records out 00:14:24.909 32505856 bytes (33 MB, 31 MiB) copied, 5.81446 s, 5.6 MB/s 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.909 [2024-11-08 16:55:53.808804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.909 16:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:24.909 [2024-11-08 16:55:53.824991] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.910 "name": 
"raid_bdev1", 00:14:24.910 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:24.910 "strip_size_kb": 0, 00:14:24.910 "state": "online", 00:14:24.910 "raid_level": "raid1", 00:14:24.910 "superblock": true, 00:14:24.910 "num_base_bdevs": 4, 00:14:24.910 "num_base_bdevs_discovered": 3, 00:14:24.910 "num_base_bdevs_operational": 3, 00:14:24.910 "base_bdevs_list": [ 00:14:24.910 { 00:14:24.910 "name": null, 00:14:24.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.910 "is_configured": false, 00:14:24.910 "data_offset": 0, 00:14:24.910 "data_size": 63488 00:14:24.910 }, 00:14:24.910 { 00:14:24.910 "name": "BaseBdev2", 00:14:24.910 "uuid": "3dc5a77c-4135-553f-9321-a8ebddee8302", 00:14:24.910 "is_configured": true, 00:14:24.910 "data_offset": 2048, 00:14:24.910 "data_size": 63488 00:14:24.910 }, 00:14:24.910 { 00:14:24.910 "name": "BaseBdev3", 00:14:24.910 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:24.910 "is_configured": true, 00:14:24.910 "data_offset": 2048, 00:14:24.910 "data_size": 63488 00:14:24.910 }, 00:14:24.910 { 00:14:24.910 "name": "BaseBdev4", 00:14:24.910 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:24.910 "is_configured": true, 00:14:24.910 "data_offset": 2048, 00:14:24.910 "data_size": 63488 00:14:24.910 } 00:14:24.910 ] 00:14:24.910 }' 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.910 16:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.910 16:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:24.910 16:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.910 16:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.910 [2024-11-08 16:55:54.292246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.910 [2024-11-08 16:55:54.295892] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:24.910 [2024-11-08 16:55:54.298111] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:24.910 16:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.910 16:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.845 "name": "raid_bdev1", 00:14:25.845 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:25.845 "strip_size_kb": 0, 00:14:25.845 "state": "online", 00:14:25.845 "raid_level": "raid1", 00:14:25.845 "superblock": true, 00:14:25.845 "num_base_bdevs": 4, 00:14:25.845 "num_base_bdevs_discovered": 4, 00:14:25.845 "num_base_bdevs_operational": 4, 00:14:25.845 
"process": { 00:14:25.845 "type": "rebuild", 00:14:25.845 "target": "spare", 00:14:25.845 "progress": { 00:14:25.845 "blocks": 20480, 00:14:25.845 "percent": 32 00:14:25.845 } 00:14:25.845 }, 00:14:25.845 "base_bdevs_list": [ 00:14:25.845 { 00:14:25.845 "name": "spare", 00:14:25.845 "uuid": "8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:25.845 "is_configured": true, 00:14:25.845 "data_offset": 2048, 00:14:25.845 "data_size": 63488 00:14:25.845 }, 00:14:25.845 { 00:14:25.845 "name": "BaseBdev2", 00:14:25.845 "uuid": "3dc5a77c-4135-553f-9321-a8ebddee8302", 00:14:25.845 "is_configured": true, 00:14:25.845 "data_offset": 2048, 00:14:25.845 "data_size": 63488 00:14:25.845 }, 00:14:25.845 { 00:14:25.845 "name": "BaseBdev3", 00:14:25.845 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:25.845 "is_configured": true, 00:14:25.845 "data_offset": 2048, 00:14:25.845 "data_size": 63488 00:14:25.845 }, 00:14:25.845 { 00:14:25.845 "name": "BaseBdev4", 00:14:25.845 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:25.845 "is_configured": true, 00:14:25.845 "data_offset": 2048, 00:14:25.845 "data_size": 63488 00:14:25.845 } 00:14:25.845 ] 00:14:25.845 }' 00:14:25.845 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.103 [2024-11-08 16:55:55.465146] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.103 [2024-11-08 16:55:55.504295] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.103 [2024-11-08 16:55:55.504395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.103 [2024-11-08 16:55:55.504417] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.103 [2024-11-08 16:55:55.504438] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.103 "name": "raid_bdev1", 00:14:26.103 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:26.103 "strip_size_kb": 0, 00:14:26.103 "state": "online", 00:14:26.103 "raid_level": "raid1", 00:14:26.103 "superblock": true, 00:14:26.103 "num_base_bdevs": 4, 00:14:26.103 "num_base_bdevs_discovered": 3, 00:14:26.103 "num_base_bdevs_operational": 3, 00:14:26.103 "base_bdevs_list": [ 00:14:26.103 { 00:14:26.103 "name": null, 00:14:26.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.103 "is_configured": false, 00:14:26.103 "data_offset": 0, 00:14:26.103 "data_size": 63488 00:14:26.103 }, 00:14:26.103 { 00:14:26.103 "name": "BaseBdev2", 00:14:26.103 "uuid": "3dc5a77c-4135-553f-9321-a8ebddee8302", 00:14:26.103 "is_configured": true, 00:14:26.103 "data_offset": 2048, 00:14:26.103 "data_size": 63488 00:14:26.103 }, 00:14:26.103 { 00:14:26.103 "name": "BaseBdev3", 00:14:26.103 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:26.103 "is_configured": true, 00:14:26.103 "data_offset": 2048, 00:14:26.103 "data_size": 63488 00:14:26.103 }, 00:14:26.103 { 00:14:26.103 "name": "BaseBdev4", 00:14:26.103 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:26.103 "is_configured": true, 00:14:26.103 "data_offset": 2048, 00:14:26.103 "data_size": 63488 00:14:26.103 } 00:14:26.103 ] 00:14:26.103 }' 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.103 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.670 16:55:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.670 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.670 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.670 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.670 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.670 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.670 16:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.670 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.670 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.670 16:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.670 16:55:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.670 "name": "raid_bdev1", 00:14:26.670 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:26.670 "strip_size_kb": 0, 00:14:26.670 "state": "online", 00:14:26.670 "raid_level": "raid1", 00:14:26.670 "superblock": true, 00:14:26.670 "num_base_bdevs": 4, 00:14:26.670 "num_base_bdevs_discovered": 3, 00:14:26.670 "num_base_bdevs_operational": 3, 00:14:26.670 "base_bdevs_list": [ 00:14:26.670 { 00:14:26.670 "name": null, 00:14:26.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.670 "is_configured": false, 00:14:26.670 "data_offset": 0, 00:14:26.670 "data_size": 63488 00:14:26.670 }, 00:14:26.670 { 00:14:26.670 "name": "BaseBdev2", 00:14:26.670 "uuid": "3dc5a77c-4135-553f-9321-a8ebddee8302", 00:14:26.670 "is_configured": true, 00:14:26.670 "data_offset": 2048, 00:14:26.670 "data_size": 
63488 00:14:26.670 }, 00:14:26.670 { 00:14:26.670 "name": "BaseBdev3", 00:14:26.670 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:26.670 "is_configured": true, 00:14:26.670 "data_offset": 2048, 00:14:26.670 "data_size": 63488 00:14:26.670 }, 00:14:26.670 { 00:14:26.670 "name": "BaseBdev4", 00:14:26.670 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:26.670 "is_configured": true, 00:14:26.670 "data_offset": 2048, 00:14:26.670 "data_size": 63488 00:14:26.670 } 00:14:26.670 ] 00:14:26.670 }' 00:14:26.670 16:55:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.670 16:55:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.670 16:55:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.670 16:55:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.670 16:55:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.670 16:55:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.670 16:55:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.670 [2024-11-08 16:55:56.144021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.670 [2024-11-08 16:55:56.147588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:26.670 [2024-11-08 16:55:56.150019] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.670 16:55:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.670 16:55:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.073 "name": "raid_bdev1", 00:14:28.073 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:28.073 "strip_size_kb": 0, 00:14:28.073 "state": "online", 00:14:28.073 "raid_level": "raid1", 00:14:28.073 "superblock": true, 00:14:28.073 "num_base_bdevs": 4, 00:14:28.073 "num_base_bdevs_discovered": 4, 00:14:28.073 "num_base_bdevs_operational": 4, 00:14:28.073 "process": { 00:14:28.073 "type": "rebuild", 00:14:28.073 "target": "spare", 00:14:28.073 "progress": { 00:14:28.073 "blocks": 20480, 00:14:28.073 "percent": 32 00:14:28.073 } 00:14:28.073 }, 00:14:28.073 "base_bdevs_list": [ 00:14:28.073 { 00:14:28.073 "name": "spare", 00:14:28.073 "uuid": "8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:28.073 "is_configured": true, 00:14:28.073 "data_offset": 2048, 00:14:28.073 "data_size": 63488 00:14:28.073 }, 00:14:28.073 { 00:14:28.073 "name": "BaseBdev2", 00:14:28.073 "uuid": 
"3dc5a77c-4135-553f-9321-a8ebddee8302", 00:14:28.073 "is_configured": true, 00:14:28.073 "data_offset": 2048, 00:14:28.073 "data_size": 63488 00:14:28.073 }, 00:14:28.073 { 00:14:28.073 "name": "BaseBdev3", 00:14:28.073 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:28.073 "is_configured": true, 00:14:28.073 "data_offset": 2048, 00:14:28.073 "data_size": 63488 00:14:28.073 }, 00:14:28.073 { 00:14:28.073 "name": "BaseBdev4", 00:14:28.073 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:28.073 "is_configured": true, 00:14:28.073 "data_offset": 2048, 00:14:28.073 "data_size": 63488 00:14:28.073 } 00:14:28.073 ] 00:14:28.073 }' 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:28.073 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.073 16:55:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.073 [2024-11-08 16:55:57.304864] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:28.073 [2024-11-08 16:55:57.455606] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.073 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.073 "name": "raid_bdev1", 00:14:28.073 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:28.073 "strip_size_kb": 0, 00:14:28.073 
"state": "online", 00:14:28.073 "raid_level": "raid1", 00:14:28.073 "superblock": true, 00:14:28.073 "num_base_bdevs": 4, 00:14:28.073 "num_base_bdevs_discovered": 3, 00:14:28.073 "num_base_bdevs_operational": 3, 00:14:28.073 "process": { 00:14:28.073 "type": "rebuild", 00:14:28.073 "target": "spare", 00:14:28.073 "progress": { 00:14:28.073 "blocks": 24576, 00:14:28.073 "percent": 38 00:14:28.073 } 00:14:28.073 }, 00:14:28.073 "base_bdevs_list": [ 00:14:28.073 { 00:14:28.073 "name": "spare", 00:14:28.073 "uuid": "8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:28.073 "is_configured": true, 00:14:28.073 "data_offset": 2048, 00:14:28.073 "data_size": 63488 00:14:28.073 }, 00:14:28.073 { 00:14:28.073 "name": null, 00:14:28.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.074 "is_configured": false, 00:14:28.074 "data_offset": 0, 00:14:28.074 "data_size": 63488 00:14:28.074 }, 00:14:28.074 { 00:14:28.074 "name": "BaseBdev3", 00:14:28.074 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:28.074 "is_configured": true, 00:14:28.074 "data_offset": 2048, 00:14:28.074 "data_size": 63488 00:14:28.074 }, 00:14:28.074 { 00:14:28.074 "name": "BaseBdev4", 00:14:28.074 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:28.074 "is_configured": true, 00:14:28.074 "data_offset": 2048, 00:14:28.074 "data_size": 63488 00:14:28.074 } 00:14:28.074 ] 00:14:28.074 }' 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=382 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.074 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.333 16:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.333 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.333 "name": "raid_bdev1", 00:14:28.333 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:28.333 "strip_size_kb": 0, 00:14:28.333 "state": "online", 00:14:28.333 "raid_level": "raid1", 00:14:28.333 "superblock": true, 00:14:28.333 "num_base_bdevs": 4, 00:14:28.333 "num_base_bdevs_discovered": 3, 00:14:28.333 "num_base_bdevs_operational": 3, 00:14:28.333 "process": { 00:14:28.333 "type": "rebuild", 00:14:28.333 "target": "spare", 00:14:28.333 "progress": { 00:14:28.333 "blocks": 26624, 00:14:28.333 "percent": 41 00:14:28.333 } 00:14:28.333 }, 00:14:28.333 "base_bdevs_list": [ 00:14:28.333 { 00:14:28.333 "name": "spare", 00:14:28.333 "uuid": "8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:28.333 "is_configured": 
true, 00:14:28.333 "data_offset": 2048, 00:14:28.333 "data_size": 63488 00:14:28.333 }, 00:14:28.333 { 00:14:28.333 "name": null, 00:14:28.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.333 "is_configured": false, 00:14:28.333 "data_offset": 0, 00:14:28.333 "data_size": 63488 00:14:28.333 }, 00:14:28.333 { 00:14:28.333 "name": "BaseBdev3", 00:14:28.333 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:28.333 "is_configured": true, 00:14:28.333 "data_offset": 2048, 00:14:28.333 "data_size": 63488 00:14:28.333 }, 00:14:28.333 { 00:14:28.333 "name": "BaseBdev4", 00:14:28.333 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:28.333 "is_configured": true, 00:14:28.333 "data_offset": 2048, 00:14:28.333 "data_size": 63488 00:14:28.333 } 00:14:28.333 ] 00:14:28.333 }' 00:14:28.333 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.333 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.333 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.333 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.333 16:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.272 "name": "raid_bdev1", 00:14:29.272 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:29.272 "strip_size_kb": 0, 00:14:29.272 "state": "online", 00:14:29.272 "raid_level": "raid1", 00:14:29.272 "superblock": true, 00:14:29.272 "num_base_bdevs": 4, 00:14:29.272 "num_base_bdevs_discovered": 3, 00:14:29.272 "num_base_bdevs_operational": 3, 00:14:29.272 "process": { 00:14:29.272 "type": "rebuild", 00:14:29.272 "target": "spare", 00:14:29.272 "progress": { 00:14:29.272 "blocks": 49152, 00:14:29.272 "percent": 77 00:14:29.272 } 00:14:29.272 }, 00:14:29.272 "base_bdevs_list": [ 00:14:29.272 { 00:14:29.272 "name": "spare", 00:14:29.272 "uuid": "8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:29.272 "is_configured": true, 00:14:29.272 "data_offset": 2048, 00:14:29.272 "data_size": 63488 00:14:29.272 }, 00:14:29.272 { 00:14:29.272 "name": null, 00:14:29.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.272 "is_configured": false, 00:14:29.272 "data_offset": 0, 00:14:29.272 "data_size": 63488 00:14:29.272 }, 00:14:29.272 { 00:14:29.272 "name": "BaseBdev3", 00:14:29.272 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:29.272 "is_configured": true, 00:14:29.272 "data_offset": 2048, 00:14:29.272 "data_size": 63488 00:14:29.272 }, 00:14:29.272 { 00:14:29.272 "name": "BaseBdev4", 00:14:29.272 "uuid": 
"e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:29.272 "is_configured": true, 00:14:29.272 "data_offset": 2048, 00:14:29.272 "data_size": 63488 00:14:29.272 } 00:14:29.272 ] 00:14:29.272 }' 00:14:29.272 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.532 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.532 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.532 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.532 16:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.101 [2024-11-08 16:55:59.364827] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:30.101 [2024-11-08 16:55:59.365041] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:30.101 [2024-11-08 16:55:59.365196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.363 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.363 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.363 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.363 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.363 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.363 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.363 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.363 16:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:30.363 16:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.363 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.363 16:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.623 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.624 "name": "raid_bdev1", 00:14:30.624 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:30.624 "strip_size_kb": 0, 00:14:30.624 "state": "online", 00:14:30.624 "raid_level": "raid1", 00:14:30.624 "superblock": true, 00:14:30.624 "num_base_bdevs": 4, 00:14:30.624 "num_base_bdevs_discovered": 3, 00:14:30.624 "num_base_bdevs_operational": 3, 00:14:30.624 "base_bdevs_list": [ 00:14:30.624 { 00:14:30.624 "name": "spare", 00:14:30.624 "uuid": "8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:30.624 "is_configured": true, 00:14:30.624 "data_offset": 2048, 00:14:30.624 "data_size": 63488 00:14:30.624 }, 00:14:30.624 { 00:14:30.624 "name": null, 00:14:30.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.624 "is_configured": false, 00:14:30.624 "data_offset": 0, 00:14:30.624 "data_size": 63488 00:14:30.624 }, 00:14:30.624 { 00:14:30.624 "name": "BaseBdev3", 00:14:30.624 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:30.624 "is_configured": true, 00:14:30.624 "data_offset": 2048, 00:14:30.624 "data_size": 63488 00:14:30.624 }, 00:14:30.624 { 00:14:30.624 "name": "BaseBdev4", 00:14:30.624 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:30.624 "is_configured": true, 00:14:30.624 "data_offset": 2048, 00:14:30.624 "data_size": 63488 00:14:30.624 } 00:14:30.624 ] 00:14:30.624 }' 00:14:30.624 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.624 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 
00:14:30.624 16:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.624 "name": "raid_bdev1", 00:14:30.624 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:30.624 "strip_size_kb": 0, 00:14:30.624 "state": "online", 00:14:30.624 "raid_level": "raid1", 00:14:30.624 "superblock": true, 00:14:30.624 "num_base_bdevs": 4, 00:14:30.624 "num_base_bdevs_discovered": 3, 00:14:30.624 "num_base_bdevs_operational": 3, 00:14:30.624 "base_bdevs_list": [ 00:14:30.624 { 00:14:30.624 "name": "spare", 00:14:30.624 "uuid": 
"8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:30.624 "is_configured": true, 00:14:30.624 "data_offset": 2048, 00:14:30.624 "data_size": 63488 00:14:30.624 }, 00:14:30.624 { 00:14:30.624 "name": null, 00:14:30.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.624 "is_configured": false, 00:14:30.624 "data_offset": 0, 00:14:30.624 "data_size": 63488 00:14:30.624 }, 00:14:30.624 { 00:14:30.624 "name": "BaseBdev3", 00:14:30.624 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:30.624 "is_configured": true, 00:14:30.624 "data_offset": 2048, 00:14:30.624 "data_size": 63488 00:14:30.624 }, 00:14:30.624 { 00:14:30.624 "name": "BaseBdev4", 00:14:30.624 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:30.624 "is_configured": true, 00:14:30.624 "data_offset": 2048, 00:14:30.624 "data_size": 63488 00:14:30.624 } 00:14:30.624 ] 00:14:30.624 }' 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.624 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.883 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.883 "name": "raid_bdev1", 00:14:30.883 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:30.883 "strip_size_kb": 0, 00:14:30.883 "state": "online", 00:14:30.883 "raid_level": "raid1", 00:14:30.883 "superblock": true, 00:14:30.883 "num_base_bdevs": 4, 00:14:30.883 "num_base_bdevs_discovered": 3, 00:14:30.883 "num_base_bdevs_operational": 3, 00:14:30.883 "base_bdevs_list": [ 00:14:30.883 { 00:14:30.883 "name": "spare", 00:14:30.883 "uuid": "8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:30.883 "is_configured": true, 00:14:30.883 "data_offset": 2048, 00:14:30.883 "data_size": 63488 00:14:30.883 }, 00:14:30.883 { 00:14:30.883 "name": null, 00:14:30.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.883 "is_configured": false, 00:14:30.883 "data_offset": 0, 00:14:30.883 "data_size": 63488 00:14:30.883 }, 00:14:30.883 { 00:14:30.883 "name": "BaseBdev3", 00:14:30.883 "uuid": 
"57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:30.883 "is_configured": true, 00:14:30.883 "data_offset": 2048, 00:14:30.883 "data_size": 63488 00:14:30.883 }, 00:14:30.883 { 00:14:30.883 "name": "BaseBdev4", 00:14:30.883 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:30.883 "is_configured": true, 00:14:30.883 "data_offset": 2048, 00:14:30.884 "data_size": 63488 00:14:30.884 } 00:14:30.884 ] 00:14:30.884 }' 00:14:30.884 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.884 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.143 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.143 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.143 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.143 [2024-11-08 16:56:00.655410] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.143 [2024-11-08 16:56:00.655579] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.143 [2024-11-08 16:56:00.655757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.143 [2024-11-08 16:56:00.655901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.143 [2024-11-08 16:56:00.655973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:31.143 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.143 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:31.143 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.143 16:56:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.143 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.402 16:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:31.661 /dev/nbd0 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:31.661 16:56:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.661 1+0 records in 00:14:31.661 1+0 records out 00:14:31.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337582 s, 12.1 MB/s 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.661 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:31.662 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:31.662 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.662 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.662 16:56:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:31.921 /dev/nbd1 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.921 1+0 records in 00:14:31.921 1+0 records out 00:14:31.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044215 s, 9.3 MB/s 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 
4096 '!=' 0 ']' 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.921 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.488 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.488 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.488 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.488 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.488 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.488 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.488 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:32.488 16:56:01 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.488 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.488 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:32.488 16:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:32.488 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:32.488 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:32.488 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.488 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.488 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:32.747 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.748 [2024-11-08 16:56:02.029341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:32.748 [2024-11-08 16:56:02.029421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.748 [2024-11-08 16:56:02.029447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:32.748 [2024-11-08 16:56:02.029467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.748 [2024-11-08 16:56:02.031938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.748 [2024-11-08 16:56:02.031992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:32.748 [2024-11-08 16:56:02.032101] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:32.748 [2024-11-08 16:56:02.032162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.748 [2024-11-08 16:56:02.032300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.748 [2024-11-08 16:56:02.032427] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:32.748 spare 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.748 [2024-11-08 16:56:02.132346] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:32.748 [2024-11-08 16:56:02.132429] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:32.748 [2024-11-08 
16:56:02.132877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:32.748 [2024-11-08 16:56:02.133093] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:32.748 [2024-11-08 16:56:02.133114] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:32.748 [2024-11-08 16:56:02.133310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.748 "name": "raid_bdev1", 00:14:32.748 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:32.748 "strip_size_kb": 0, 00:14:32.748 "state": "online", 00:14:32.748 "raid_level": "raid1", 00:14:32.748 "superblock": true, 00:14:32.748 "num_base_bdevs": 4, 00:14:32.748 "num_base_bdevs_discovered": 3, 00:14:32.748 "num_base_bdevs_operational": 3, 00:14:32.748 "base_bdevs_list": [ 00:14:32.748 { 00:14:32.748 "name": "spare", 00:14:32.748 "uuid": "8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:32.748 "is_configured": true, 00:14:32.748 "data_offset": 2048, 00:14:32.748 "data_size": 63488 00:14:32.748 }, 00:14:32.748 { 00:14:32.748 "name": null, 00:14:32.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.748 "is_configured": false, 00:14:32.748 "data_offset": 2048, 00:14:32.748 "data_size": 63488 00:14:32.748 }, 00:14:32.748 { 00:14:32.748 "name": "BaseBdev3", 00:14:32.748 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:32.748 "is_configured": true, 00:14:32.748 "data_offset": 2048, 00:14:32.748 "data_size": 63488 00:14:32.748 }, 00:14:32.748 { 00:14:32.748 "name": "BaseBdev4", 00:14:32.748 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:32.748 "is_configured": true, 00:14:32.748 "data_offset": 2048, 00:14:32.748 "data_size": 63488 00:14:32.748 } 00:14:32.748 ] 00:14:32.748 }' 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.748 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.317 "name": "raid_bdev1", 00:14:33.317 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:33.317 "strip_size_kb": 0, 00:14:33.317 "state": "online", 00:14:33.317 "raid_level": "raid1", 00:14:33.317 "superblock": true, 00:14:33.317 "num_base_bdevs": 4, 00:14:33.317 "num_base_bdevs_discovered": 3, 00:14:33.317 "num_base_bdevs_operational": 3, 00:14:33.317 "base_bdevs_list": [ 00:14:33.317 { 00:14:33.317 "name": "spare", 00:14:33.317 "uuid": "8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:33.317 "is_configured": true, 00:14:33.317 "data_offset": 2048, 00:14:33.317 "data_size": 63488 00:14:33.317 }, 00:14:33.317 { 00:14:33.317 "name": null, 00:14:33.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.317 "is_configured": false, 00:14:33.317 "data_offset": 2048, 00:14:33.317 "data_size": 63488 00:14:33.317 }, 00:14:33.317 { 00:14:33.317 "name": "BaseBdev3", 00:14:33.317 
"uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:33.317 "is_configured": true, 00:14:33.317 "data_offset": 2048, 00:14:33.317 "data_size": 63488 00:14:33.317 }, 00:14:33.317 { 00:14:33.317 "name": "BaseBdev4", 00:14:33.317 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:33.317 "is_configured": true, 00:14:33.317 "data_offset": 2048, 00:14:33.317 "data_size": 63488 00:14:33.317 } 00:14:33.317 ] 00:14:33.317 }' 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.317 [2024-11-08 16:56:02.788228] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.317 16:56:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.317 "name": "raid_bdev1", 00:14:33.317 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:33.317 "strip_size_kb": 0, 00:14:33.317 "state": "online", 
00:14:33.317 "raid_level": "raid1", 00:14:33.317 "superblock": true, 00:14:33.317 "num_base_bdevs": 4, 00:14:33.317 "num_base_bdevs_discovered": 2, 00:14:33.317 "num_base_bdevs_operational": 2, 00:14:33.317 "base_bdevs_list": [ 00:14:33.317 { 00:14:33.317 "name": null, 00:14:33.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.317 "is_configured": false, 00:14:33.317 "data_offset": 0, 00:14:33.317 "data_size": 63488 00:14:33.317 }, 00:14:33.317 { 00:14:33.317 "name": null, 00:14:33.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.317 "is_configured": false, 00:14:33.317 "data_offset": 2048, 00:14:33.317 "data_size": 63488 00:14:33.317 }, 00:14:33.317 { 00:14:33.317 "name": "BaseBdev3", 00:14:33.317 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:33.317 "is_configured": true, 00:14:33.317 "data_offset": 2048, 00:14:33.317 "data_size": 63488 00:14:33.317 }, 00:14:33.317 { 00:14:33.317 "name": "BaseBdev4", 00:14:33.317 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:33.317 "is_configured": true, 00:14:33.317 "data_offset": 2048, 00:14:33.317 "data_size": 63488 00:14:33.317 } 00:14:33.317 ] 00:14:33.317 }' 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.317 16:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.885 16:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:33.885 16:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.885 16:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.885 [2024-11-08 16:56:03.251527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.885 [2024-11-08 16:56:03.251763] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:14:33.885 [2024-11-08 16:56:03.251797] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:33.885 [2024-11-08 16:56:03.251838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.885 [2024-11-08 16:56:03.255228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:33.885 [2024-11-08 16:56:03.257368] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.885 16:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.885 16:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:34.821 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.821 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.821 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.821 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.821 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.821 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.821 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.821 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.821 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.822 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.822 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.822 "name": "raid_bdev1", 00:14:34.822 "uuid": 
"2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:34.822 "strip_size_kb": 0, 00:14:34.822 "state": "online", 00:14:34.822 "raid_level": "raid1", 00:14:34.822 "superblock": true, 00:14:34.822 "num_base_bdevs": 4, 00:14:34.822 "num_base_bdevs_discovered": 3, 00:14:34.822 "num_base_bdevs_operational": 3, 00:14:34.822 "process": { 00:14:34.822 "type": "rebuild", 00:14:34.822 "target": "spare", 00:14:34.822 "progress": { 00:14:34.822 "blocks": 20480, 00:14:34.822 "percent": 32 00:14:34.822 } 00:14:34.822 }, 00:14:34.822 "base_bdevs_list": [ 00:14:34.822 { 00:14:34.822 "name": "spare", 00:14:34.822 "uuid": "8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:34.822 "is_configured": true, 00:14:34.822 "data_offset": 2048, 00:14:34.822 "data_size": 63488 00:14:34.822 }, 00:14:34.822 { 00:14:34.822 "name": null, 00:14:34.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.822 "is_configured": false, 00:14:34.822 "data_offset": 2048, 00:14:34.822 "data_size": 63488 00:14:34.822 }, 00:14:34.822 { 00:14:34.822 "name": "BaseBdev3", 00:14:34.822 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:34.822 "is_configured": true, 00:14:34.822 "data_offset": 2048, 00:14:34.822 "data_size": 63488 00:14:34.822 }, 00:14:34.822 { 00:14:34.822 "name": "BaseBdev4", 00:14:34.822 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:34.822 "is_configured": true, 00:14:34.822 "data_offset": 2048, 00:14:34.822 "data_size": 63488 00:14:34.822 } 00:14:34.822 ] 00:14:34.822 }' 00:14:34.822 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.081 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.081 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.081 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.081 16:56:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:35.081 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.081 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.081 [2024-11-08 16:56:04.400187] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.082 [2024-11-08 16:56:04.462876] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.082 [2024-11-08 16:56:04.463003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.082 [2024-11-08 16:56:04.463023] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.082 [2024-11-08 16:56:04.463034] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.082 "name": "raid_bdev1", 00:14:35.082 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:35.082 "strip_size_kb": 0, 00:14:35.082 "state": "online", 00:14:35.082 "raid_level": "raid1", 00:14:35.082 "superblock": true, 00:14:35.082 "num_base_bdevs": 4, 00:14:35.082 "num_base_bdevs_discovered": 2, 00:14:35.082 "num_base_bdevs_operational": 2, 00:14:35.082 "base_bdevs_list": [ 00:14:35.082 { 00:14:35.082 "name": null, 00:14:35.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.082 "is_configured": false, 00:14:35.082 "data_offset": 0, 00:14:35.082 "data_size": 63488 00:14:35.082 }, 00:14:35.082 { 00:14:35.082 "name": null, 00:14:35.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.082 "is_configured": false, 00:14:35.082 "data_offset": 2048, 00:14:35.082 "data_size": 63488 00:14:35.082 }, 00:14:35.082 { 00:14:35.082 "name": "BaseBdev3", 00:14:35.082 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:35.082 "is_configured": true, 00:14:35.082 "data_offset": 2048, 00:14:35.082 "data_size": 63488 00:14:35.082 }, 00:14:35.082 { 00:14:35.082 "name": "BaseBdev4", 00:14:35.082 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:35.082 "is_configured": true, 00:14:35.082 
"data_offset": 2048, 00:14:35.082 "data_size": 63488 00:14:35.082 } 00:14:35.082 ] 00:14:35.082 }' 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.082 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.653 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:35.653 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.653 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.653 [2024-11-08 16:56:04.898510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:35.653 [2024-11-08 16:56:04.898602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.653 [2024-11-08 16:56:04.898633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:35.653 [2024-11-08 16:56:04.898646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.653 [2024-11-08 16:56:04.899120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.653 [2024-11-08 16:56:04.899150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:35.653 [2024-11-08 16:56:04.899248] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:35.653 [2024-11-08 16:56:04.899268] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:35.653 [2024-11-08 16:56:04.899280] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:35.653 [2024-11-08 16:56:04.899305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:35.653 [2024-11-08 16:56:04.902486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:35.653 spare 00:14:35.653 16:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.653 16:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:35.653 [2024-11-08 16:56:04.904422] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.590 16:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.590 16:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.590 16:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.590 16:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.590 16:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.590 16:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.590 16:56:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.590 16:56:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.590 16:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.590 16:56:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.590 16:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.590 "name": "raid_bdev1", 00:14:36.590 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:36.590 "strip_size_kb": 0, 00:14:36.590 "state": "online", 00:14:36.590 
"raid_level": "raid1", 00:14:36.590 "superblock": true, 00:14:36.590 "num_base_bdevs": 4, 00:14:36.590 "num_base_bdevs_discovered": 3, 00:14:36.590 "num_base_bdevs_operational": 3, 00:14:36.590 "process": { 00:14:36.590 "type": "rebuild", 00:14:36.590 "target": "spare", 00:14:36.590 "progress": { 00:14:36.590 "blocks": 20480, 00:14:36.590 "percent": 32 00:14:36.590 } 00:14:36.590 }, 00:14:36.590 "base_bdevs_list": [ 00:14:36.590 { 00:14:36.590 "name": "spare", 00:14:36.590 "uuid": "8962fabf-7dce-5e3a-9ced-0db84fdb050a", 00:14:36.590 "is_configured": true, 00:14:36.590 "data_offset": 2048, 00:14:36.590 "data_size": 63488 00:14:36.590 }, 00:14:36.590 { 00:14:36.590 "name": null, 00:14:36.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.590 "is_configured": false, 00:14:36.590 "data_offset": 2048, 00:14:36.590 "data_size": 63488 00:14:36.590 }, 00:14:36.590 { 00:14:36.590 "name": "BaseBdev3", 00:14:36.590 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:36.590 "is_configured": true, 00:14:36.590 "data_offset": 2048, 00:14:36.590 "data_size": 63488 00:14:36.590 }, 00:14:36.590 { 00:14:36.590 "name": "BaseBdev4", 00:14:36.591 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:36.591 "is_configured": true, 00:14:36.591 "data_offset": 2048, 00:14:36.591 "data_size": 63488 00:14:36.591 } 00:14:36.591 ] 00:14:36.591 }' 00:14:36.591 16:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.591 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.591 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.591 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.591 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:36.591 16:56:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.591 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.591 [2024-11-08 16:56:06.061743] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.591 [2024-11-08 16:56:06.109584] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:36.591 [2024-11-08 16:56:06.109674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.591 [2024-11-08 16:56:06.109697] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.591 [2024-11-08 16:56:06.109707] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.851 
16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.851 "name": "raid_bdev1", 00:14:36.851 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:36.851 "strip_size_kb": 0, 00:14:36.851 "state": "online", 00:14:36.851 "raid_level": "raid1", 00:14:36.851 "superblock": true, 00:14:36.851 "num_base_bdevs": 4, 00:14:36.851 "num_base_bdevs_discovered": 2, 00:14:36.851 "num_base_bdevs_operational": 2, 00:14:36.851 "base_bdevs_list": [ 00:14:36.851 { 00:14:36.851 "name": null, 00:14:36.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.851 "is_configured": false, 00:14:36.851 "data_offset": 0, 00:14:36.851 "data_size": 63488 00:14:36.851 }, 00:14:36.851 { 00:14:36.851 "name": null, 00:14:36.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.851 "is_configured": false, 00:14:36.851 "data_offset": 2048, 00:14:36.851 "data_size": 63488 00:14:36.851 }, 00:14:36.851 { 00:14:36.851 "name": "BaseBdev3", 00:14:36.851 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:36.851 "is_configured": true, 00:14:36.851 "data_offset": 2048, 00:14:36.851 "data_size": 63488 00:14:36.851 }, 00:14:36.851 { 00:14:36.851 "name": "BaseBdev4", 00:14:36.851 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:36.851 "is_configured": true, 00:14:36.851 "data_offset": 2048, 00:14:36.851 "data_size": 63488 00:14:36.851 } 00:14:36.851 ] 00:14:36.851 }' 00:14:36.851 16:56:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.851 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.110 "name": "raid_bdev1", 00:14:37.110 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:37.110 "strip_size_kb": 0, 00:14:37.110 "state": "online", 00:14:37.110 "raid_level": "raid1", 00:14:37.110 "superblock": true, 00:14:37.110 "num_base_bdevs": 4, 00:14:37.110 "num_base_bdevs_discovered": 2, 00:14:37.110 "num_base_bdevs_operational": 2, 00:14:37.110 "base_bdevs_list": [ 00:14:37.110 { 00:14:37.110 "name": null, 00:14:37.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.110 "is_configured": false, 00:14:37.110 "data_offset": 0, 00:14:37.110 "data_size": 63488 00:14:37.110 }, 00:14:37.110 
{ 00:14:37.110 "name": null, 00:14:37.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.110 "is_configured": false, 00:14:37.110 "data_offset": 2048, 00:14:37.110 "data_size": 63488 00:14:37.110 }, 00:14:37.110 { 00:14:37.110 "name": "BaseBdev3", 00:14:37.110 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:37.110 "is_configured": true, 00:14:37.110 "data_offset": 2048, 00:14:37.110 "data_size": 63488 00:14:37.110 }, 00:14:37.110 { 00:14:37.110 "name": "BaseBdev4", 00:14:37.110 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:37.110 "is_configured": true, 00:14:37.110 "data_offset": 2048, 00:14:37.110 "data_size": 63488 00:14:37.110 } 00:14:37.110 ] 00:14:37.110 }' 00:14:37.110 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.370 [2024-11-08 16:56:06.724917] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:37.370 [2024-11-08 16:56:06.725002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.370 [2024-11-08 16:56:06.725033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:37.370 [2024-11-08 16:56:06.725045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.370 [2024-11-08 16:56:06.725557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.370 [2024-11-08 16:56:06.725592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:37.370 [2024-11-08 16:56:06.725701] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:37.370 [2024-11-08 16:56:06.725717] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:37.370 [2024-11-08 16:56:06.725728] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:37.370 [2024-11-08 16:56:06.725740] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:37.370 BaseBdev1 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.370 16:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.306 16:56:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.306 "name": "raid_bdev1", 00:14:38.306 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:38.306 "strip_size_kb": 0, 00:14:38.306 "state": "online", 00:14:38.306 "raid_level": "raid1", 00:14:38.306 "superblock": true, 00:14:38.306 "num_base_bdevs": 4, 00:14:38.306 "num_base_bdevs_discovered": 2, 00:14:38.306 "num_base_bdevs_operational": 2, 00:14:38.306 "base_bdevs_list": [ 00:14:38.306 { 00:14:38.306 "name": null, 00:14:38.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.306 "is_configured": false, 00:14:38.306 "data_offset": 0, 00:14:38.306 "data_size": 63488 00:14:38.306 }, 00:14:38.306 { 00:14:38.306 "name": null, 00:14:38.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.306 
"is_configured": false, 00:14:38.306 "data_offset": 2048, 00:14:38.306 "data_size": 63488 00:14:38.306 }, 00:14:38.306 { 00:14:38.306 "name": "BaseBdev3", 00:14:38.306 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:38.306 "is_configured": true, 00:14:38.306 "data_offset": 2048, 00:14:38.306 "data_size": 63488 00:14:38.306 }, 00:14:38.306 { 00:14:38.306 "name": "BaseBdev4", 00:14:38.306 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:38.306 "is_configured": true, 00:14:38.306 "data_offset": 2048, 00:14:38.306 "data_size": 63488 00:14:38.306 } 00:14:38.306 ] 00:14:38.306 }' 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.306 16:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.876 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:38.877 "name": "raid_bdev1", 00:14:38.877 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:38.877 "strip_size_kb": 0, 00:14:38.877 "state": "online", 00:14:38.877 "raid_level": "raid1", 00:14:38.877 "superblock": true, 00:14:38.877 "num_base_bdevs": 4, 00:14:38.877 "num_base_bdevs_discovered": 2, 00:14:38.877 "num_base_bdevs_operational": 2, 00:14:38.877 "base_bdevs_list": [ 00:14:38.877 { 00:14:38.877 "name": null, 00:14:38.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.877 "is_configured": false, 00:14:38.877 "data_offset": 0, 00:14:38.877 "data_size": 63488 00:14:38.877 }, 00:14:38.877 { 00:14:38.877 "name": null, 00:14:38.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.877 "is_configured": false, 00:14:38.877 "data_offset": 2048, 00:14:38.877 "data_size": 63488 00:14:38.877 }, 00:14:38.877 { 00:14:38.877 "name": "BaseBdev3", 00:14:38.877 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:38.877 "is_configured": true, 00:14:38.877 "data_offset": 2048, 00:14:38.877 "data_size": 63488 00:14:38.877 }, 00:14:38.877 { 00:14:38.877 "name": "BaseBdev4", 00:14:38.877 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:38.877 "is_configured": true, 00:14:38.877 "data_offset": 2048, 00:14:38.877 "data_size": 63488 00:14:38.877 } 00:14:38.877 ] 00:14:38.877 }' 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.877 [2024-11-08 16:56:08.378251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.877 [2024-11-08 16:56:08.378514] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:38.877 [2024-11-08 16:56:08.378586] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:38.877 request: 00:14:38.877 { 00:14:38.877 "base_bdev": "BaseBdev1", 00:14:38.877 "raid_bdev": "raid_bdev1", 00:14:38.877 "method": "bdev_raid_add_base_bdev", 00:14:38.877 "req_id": 1 00:14:38.877 } 00:14:38.877 Got JSON-RPC error response 00:14:38.877 response: 00:14:38.877 { 00:14:38.877 "code": -22, 00:14:38.877 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:38.877 } 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.877 16:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.266 "name": "raid_bdev1", 00:14:40.266 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:40.266 "strip_size_kb": 0, 00:14:40.266 "state": "online", 00:14:40.266 "raid_level": "raid1", 00:14:40.266 "superblock": true, 00:14:40.266 "num_base_bdevs": 4, 00:14:40.266 "num_base_bdevs_discovered": 2, 00:14:40.266 "num_base_bdevs_operational": 2, 00:14:40.266 "base_bdevs_list": [ 00:14:40.266 { 00:14:40.266 "name": null, 00:14:40.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.266 "is_configured": false, 00:14:40.266 "data_offset": 0, 00:14:40.266 "data_size": 63488 00:14:40.266 }, 00:14:40.266 { 00:14:40.266 "name": null, 00:14:40.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.266 "is_configured": false, 00:14:40.266 "data_offset": 2048, 00:14:40.266 "data_size": 63488 00:14:40.266 }, 00:14:40.266 { 00:14:40.266 "name": "BaseBdev3", 00:14:40.266 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:40.266 "is_configured": true, 00:14:40.266 "data_offset": 2048, 00:14:40.266 "data_size": 63488 00:14:40.266 }, 00:14:40.266 { 00:14:40.266 "name": "BaseBdev4", 00:14:40.266 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:40.266 "is_configured": true, 00:14:40.266 "data_offset": 2048, 00:14:40.266 "data_size": 63488 00:14:40.266 } 00:14:40.266 ] 00:14:40.266 }' 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.266 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.526 16:56:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.526 "name": "raid_bdev1", 00:14:40.526 "uuid": "2df4cf74-a0b0-4544-8a00-3ff1688dacce", 00:14:40.526 "strip_size_kb": 0, 00:14:40.526 "state": "online", 00:14:40.526 "raid_level": "raid1", 00:14:40.526 "superblock": true, 00:14:40.526 "num_base_bdevs": 4, 00:14:40.526 "num_base_bdevs_discovered": 2, 00:14:40.526 "num_base_bdevs_operational": 2, 00:14:40.526 "base_bdevs_list": [ 00:14:40.526 { 00:14:40.526 "name": null, 00:14:40.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.526 "is_configured": false, 00:14:40.526 "data_offset": 0, 00:14:40.526 "data_size": 63488 00:14:40.526 }, 00:14:40.526 { 00:14:40.526 "name": null, 00:14:40.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.526 "is_configured": false, 00:14:40.526 "data_offset": 2048, 00:14:40.526 "data_size": 63488 00:14:40.526 }, 00:14:40.526 { 00:14:40.526 "name": "BaseBdev3", 00:14:40.526 "uuid": "57e925a4-e55f-5fcb-bc74-3c5df5219cab", 00:14:40.526 "is_configured": true, 00:14:40.526 "data_offset": 2048, 00:14:40.526 "data_size": 63488 00:14:40.526 }, 
00:14:40.526 { 00:14:40.526 "name": "BaseBdev4", 00:14:40.526 "uuid": "e820fc26-00de-5b60-9e30-0887e775ec2c", 00:14:40.526 "is_configured": true, 00:14:40.526 "data_offset": 2048, 00:14:40.526 "data_size": 63488 00:14:40.526 } 00:14:40.526 ] 00:14:40.526 }' 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88679 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88679 ']' 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88679 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.526 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88679 00:14:40.526 killing process with pid 88679 00:14:40.526 Received shutdown signal, test time was about 60.000000 seconds 00:14:40.526 00:14:40.526 Latency(us) 00:14:40.526 [2024-11-08T16:56:10.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.526 [2024-11-08T16:56:10.054Z] =================================================================================================================== 00:14:40.527 [2024-11-08T16:56:10.055Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:40.527 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:14:40.527 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:40.527 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88679' 00:14:40.527 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88679 00:14:40.527 16:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88679 00:14:40.527 [2024-11-08 16:56:09.980854] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.527 [2024-11-08 16:56:09.981001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.527 [2024-11-08 16:56:09.981085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.527 [2024-11-08 16:56:09.981100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:40.527 [2024-11-08 16:56:10.036196] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.785 ************************************ 00:14:40.785 END TEST raid_rebuild_test_sb 00:14:40.785 ************************************ 00:14:40.785 16:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:40.785 00:14:40.785 real 0m24.570s 00:14:40.785 user 0m30.343s 00:14:40.785 sys 0m3.856s 00:14:40.785 16:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.785 16:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.045 16:56:10 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:41.045 16:56:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:41.045 16:56:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.045 16:56:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:41.045 ************************************ 00:14:41.045 START TEST raid_rebuild_test_io 00:14:41.045 ************************************ 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89426 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89426 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89426 ']' 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.045 16:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.045 [2024-11-08 16:56:10.472554] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:41.045 [2024-11-08 16:56:10.472837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89426 ] 00:14:41.045 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:41.045 Zero copy mechanism will not be used. 
00:14:41.305 [2024-11-08 16:56:10.641808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.305 [2024-11-08 16:56:10.697160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.305 [2024-11-08 16:56:10.749948] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.305 [2024-11-08 16:56:10.750113] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.872 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.872 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:14:41.872 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:41.872 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:41.872 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.873 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.873 BaseBdev1_malloc 00:14:41.873 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.873 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:41.873 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.873 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.873 [2024-11-08 16:56:11.384264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:41.873 [2024-11-08 16:56:11.384427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.873 [2024-11-08 16:56:11.384467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:41.873 [2024-11-08 
16:56:11.384494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.873 [2024-11-08 16:56:11.387093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.873 [2024-11-08 16:56:11.387138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:41.873 BaseBdev1 00:14:41.873 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.873 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:41.873 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:41.873 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.873 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.132 BaseBdev2_malloc 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.132 [2024-11-08 16:56:11.421882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:42.132 [2024-11-08 16:56:11.421969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.132 [2024-11-08 16:56:11.422001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:42.132 [2024-11-08 16:56:11.422015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.132 [2024-11-08 16:56:11.424820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:42.132 [2024-11-08 16:56:11.424950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:42.132 BaseBdev2 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.132 BaseBdev3_malloc 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.132 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.132 [2024-11-08 16:56:11.447454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:42.132 [2024-11-08 16:56:11.447522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.132 [2024-11-08 16:56:11.447554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:42.132 [2024-11-08 16:56:11.447565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.132 [2024-11-08 16:56:11.449968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.132 [2024-11-08 16:56:11.450011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:42.132 BaseBdev3 00:14:42.133 16:56:11 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.133 BaseBdev4_malloc 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.133 [2024-11-08 16:56:11.476655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:42.133 [2024-11-08 16:56:11.476723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.133 [2024-11-08 16:56:11.476751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:42.133 [2024-11-08 16:56:11.476760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.133 [2024-11-08 16:56:11.479227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.133 [2024-11-08 16:56:11.479334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:42.133 BaseBdev4 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.133 spare_malloc 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.133 spare_delay 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.133 [2024-11-08 16:56:11.518263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:42.133 [2024-11-08 16:56:11.518332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.133 [2024-11-08 16:56:11.518361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:42.133 [2024-11-08 16:56:11.518371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.133 [2024-11-08 16:56:11.520781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.133 [2024-11-08 16:56:11.520871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:42.133 spare 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.133 [2024-11-08 16:56:11.530326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.133 [2024-11-08 16:56:11.532365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.133 [2024-11-08 16:56:11.532505] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.133 [2024-11-08 16:56:11.532563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:42.133 [2024-11-08 16:56:11.532679] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:42.133 [2024-11-08 16:56:11.532692] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:42.133 [2024-11-08 16:56:11.533005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:42.133 [2024-11-08 16:56:11.533184] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:42.133 [2024-11-08 16:56:11.533204] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:42.133 [2024-11-08 16:56:11.533363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:42.133 16:56:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.133 "name": "raid_bdev1", 00:14:42.133 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:42.133 "strip_size_kb": 0, 00:14:42.133 "state": "online", 00:14:42.133 "raid_level": "raid1", 00:14:42.133 "superblock": false, 00:14:42.133 "num_base_bdevs": 4, 00:14:42.133 "num_base_bdevs_discovered": 4, 00:14:42.133 "num_base_bdevs_operational": 4, 00:14:42.133 "base_bdevs_list": [ 00:14:42.133 
{ 00:14:42.133 "name": "BaseBdev1", 00:14:42.133 "uuid": "3d5da113-b258-5ca6-98a7-ce0b19b7f8a6", 00:14:42.133 "is_configured": true, 00:14:42.133 "data_offset": 0, 00:14:42.133 "data_size": 65536 00:14:42.133 }, 00:14:42.133 { 00:14:42.133 "name": "BaseBdev2", 00:14:42.133 "uuid": "0ccff546-92af-5335-9086-ef75dda7751c", 00:14:42.133 "is_configured": true, 00:14:42.133 "data_offset": 0, 00:14:42.133 "data_size": 65536 00:14:42.133 }, 00:14:42.133 { 00:14:42.133 "name": "BaseBdev3", 00:14:42.133 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:42.133 "is_configured": true, 00:14:42.133 "data_offset": 0, 00:14:42.133 "data_size": 65536 00:14:42.133 }, 00:14:42.133 { 00:14:42.133 "name": "BaseBdev4", 00:14:42.133 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:42.133 "is_configured": true, 00:14:42.133 "data_offset": 0, 00:14:42.133 "data_size": 65536 00:14:42.133 } 00:14:42.133 ] 00:14:42.133 }' 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.133 16:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 [2024-11-08 16:56:12.009992] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.704 
16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 [2024-11-08 16:56:12.109391] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.704 "name": "raid_bdev1", 00:14:42.704 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:42.704 "strip_size_kb": 0, 00:14:42.704 "state": "online", 00:14:42.704 "raid_level": "raid1", 00:14:42.704 "superblock": false, 00:14:42.704 "num_base_bdevs": 4, 00:14:42.704 "num_base_bdevs_discovered": 3, 00:14:42.704 "num_base_bdevs_operational": 3, 00:14:42.704 "base_bdevs_list": [ 00:14:42.704 { 00:14:42.704 "name": null, 00:14:42.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.704 "is_configured": false, 00:14:42.704 "data_offset": 0, 00:14:42.704 "data_size": 65536 00:14:42.704 }, 00:14:42.704 { 00:14:42.704 "name": "BaseBdev2", 00:14:42.704 "uuid": "0ccff546-92af-5335-9086-ef75dda7751c", 00:14:42.704 "is_configured": true, 00:14:42.704 "data_offset": 0, 00:14:42.704 "data_size": 65536 00:14:42.704 }, 00:14:42.704 { 00:14:42.704 "name": "BaseBdev3", 00:14:42.704 "uuid": 
"9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:42.704 "is_configured": true, 00:14:42.704 "data_offset": 0, 00:14:42.704 "data_size": 65536 00:14:42.704 }, 00:14:42.704 { 00:14:42.704 "name": "BaseBdev4", 00:14:42.704 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:42.704 "is_configured": true, 00:14:42.704 "data_offset": 0, 00:14:42.704 "data_size": 65536 00:14:42.704 } 00:14:42.704 ] 00:14:42.704 }' 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.704 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 [2024-11-08 16:56:12.211385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:42.705 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:42.705 Zero copy mechanism will not be used. 00:14:42.705 Running I/O for 60 seconds... 00:14:43.272 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:43.272 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.272 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.272 [2024-11-08 16:56:12.570023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.272 16:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.272 16:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:43.272 [2024-11-08 16:56:12.604967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:43.272 [2024-11-08 16:56:12.607281] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.272 [2024-11-08 16:56:12.757468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:43.531 
[2024-11-08 16:56:12.896934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:43.790 [2024-11-08 16:56:13.147855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:43.790 [2024-11-08 16:56:13.149297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:44.050 196.00 IOPS, 588.00 MiB/s [2024-11-08T16:56:13.578Z] [2024-11-08 16:56:13.351333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:44.050 [2024-11-08 16:56:13.351733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:44.309 [2024-11-08 16:56:13.598338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.309 [2024-11-08 16:56:13.605907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.309 
16:56:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.309 "name": "raid_bdev1", 00:14:44.309 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:44.309 "strip_size_kb": 0, 00:14:44.309 "state": "online", 00:14:44.309 "raid_level": "raid1", 00:14:44.309 "superblock": false, 00:14:44.309 "num_base_bdevs": 4, 00:14:44.309 "num_base_bdevs_discovered": 4, 00:14:44.309 "num_base_bdevs_operational": 4, 00:14:44.309 "process": { 00:14:44.309 "type": "rebuild", 00:14:44.309 "target": "spare", 00:14:44.309 "progress": { 00:14:44.309 "blocks": 14336, 00:14:44.309 "percent": 21 00:14:44.309 } 00:14:44.309 }, 00:14:44.309 "base_bdevs_list": [ 00:14:44.309 { 00:14:44.309 "name": "spare", 00:14:44.309 "uuid": "99343db5-6813-5992-b2b7-9dfea65245f1", 00:14:44.309 "is_configured": true, 00:14:44.309 "data_offset": 0, 00:14:44.309 "data_size": 65536 00:14:44.309 }, 00:14:44.309 { 00:14:44.309 "name": "BaseBdev2", 00:14:44.309 "uuid": "0ccff546-92af-5335-9086-ef75dda7751c", 00:14:44.309 "is_configured": true, 00:14:44.309 "data_offset": 0, 00:14:44.309 "data_size": 65536 00:14:44.309 }, 00:14:44.309 { 00:14:44.309 "name": "BaseBdev3", 00:14:44.309 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:44.309 "is_configured": true, 00:14:44.309 "data_offset": 0, 00:14:44.309 "data_size": 65536 00:14:44.309 }, 00:14:44.309 { 00:14:44.309 "name": "BaseBdev4", 00:14:44.309 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:44.309 "is_configured": true, 00:14:44.309 "data_offset": 0, 00:14:44.309 "data_size": 65536 00:14:44.309 } 00:14:44.309 ] 00:14:44.309 }' 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.309 16:56:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.309 [2024-11-08 16:56:13.759557] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.309 [2024-11-08 16:56:13.828303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:44.309 [2024-11-08 16:56:13.828724] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:44.568 [2024-11-08 16:56:13.931778] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:44.568 [2024-11-08 16:56:13.943204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.568 [2024-11-08 16:56:13.943403] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.568 [2024-11-08 16:56:13.943442] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:44.568 [2024-11-08 16:56:13.965051] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.568 16:56:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.568 16:56:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.568 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.568 "name": "raid_bdev1", 00:14:44.568 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:44.568 "strip_size_kb": 0, 00:14:44.568 "state": "online", 00:14:44.568 "raid_level": "raid1", 00:14:44.568 "superblock": false, 00:14:44.568 "num_base_bdevs": 4, 00:14:44.568 "num_base_bdevs_discovered": 3, 00:14:44.568 
"num_base_bdevs_operational": 3, 00:14:44.568 "base_bdevs_list": [ 00:14:44.568 { 00:14:44.568 "name": null, 00:14:44.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.568 "is_configured": false, 00:14:44.568 "data_offset": 0, 00:14:44.568 "data_size": 65536 00:14:44.568 }, 00:14:44.568 { 00:14:44.568 "name": "BaseBdev2", 00:14:44.568 "uuid": "0ccff546-92af-5335-9086-ef75dda7751c", 00:14:44.568 "is_configured": true, 00:14:44.568 "data_offset": 0, 00:14:44.568 "data_size": 65536 00:14:44.568 }, 00:14:44.568 { 00:14:44.568 "name": "BaseBdev3", 00:14:44.568 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:44.568 "is_configured": true, 00:14:44.568 "data_offset": 0, 00:14:44.568 "data_size": 65536 00:14:44.568 }, 00:14:44.568 { 00:14:44.568 "name": "BaseBdev4", 00:14:44.568 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:44.568 "is_configured": true, 00:14:44.568 "data_offset": 0, 00:14:44.568 "data_size": 65536 00:14:44.568 } 00:14:44.568 ] 00:14:44.568 }' 00:14:44.568 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.568 16:56:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.085 139.00 IOPS, 417.00 MiB/s [2024-11-08T16:56:14.613Z] 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.085 "name": "raid_bdev1", 00:14:45.085 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:45.085 "strip_size_kb": 0, 00:14:45.085 "state": "online", 00:14:45.085 "raid_level": "raid1", 00:14:45.085 "superblock": false, 00:14:45.085 "num_base_bdevs": 4, 00:14:45.085 "num_base_bdevs_discovered": 3, 00:14:45.085 "num_base_bdevs_operational": 3, 00:14:45.085 "base_bdevs_list": [ 00:14:45.085 { 00:14:45.085 "name": null, 00:14:45.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.085 "is_configured": false, 00:14:45.085 "data_offset": 0, 00:14:45.085 "data_size": 65536 00:14:45.085 }, 00:14:45.085 { 00:14:45.085 "name": "BaseBdev2", 00:14:45.085 "uuid": "0ccff546-92af-5335-9086-ef75dda7751c", 00:14:45.085 "is_configured": true, 00:14:45.085 "data_offset": 0, 00:14:45.085 "data_size": 65536 00:14:45.085 }, 00:14:45.085 { 00:14:45.085 "name": "BaseBdev3", 00:14:45.085 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:45.085 "is_configured": true, 00:14:45.085 "data_offset": 0, 00:14:45.085 "data_size": 65536 00:14:45.085 }, 00:14:45.085 { 00:14:45.085 "name": "BaseBdev4", 00:14:45.085 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:45.085 "is_configured": true, 00:14:45.085 "data_offset": 0, 00:14:45.085 "data_size": 65536 00:14:45.085 } 00:14:45.085 ] 00:14:45.085 }' 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.085 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.085 
16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.343 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.343 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:45.343 16:56:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.343 16:56:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.343 [2024-11-08 16:56:14.626209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.343 16:56:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.343 16:56:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:45.343 [2024-11-08 16:56:14.696408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:45.343 [2024-11-08 16:56:14.698897] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.343 [2024-11-08 16:56:14.817855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.343 [2024-11-08 16:56:14.818452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.601 [2024-11-08 16:56:14.967978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:46.116 138.67 IOPS, 416.00 MiB/s [2024-11-08T16:56:15.644Z] [2024-11-08 16:56:15.423928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.375 16:56:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.375 "name": "raid_bdev1", 00:14:46.375 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:46.375 "strip_size_kb": 0, 00:14:46.375 "state": "online", 00:14:46.375 "raid_level": "raid1", 00:14:46.375 "superblock": false, 00:14:46.375 "num_base_bdevs": 4, 00:14:46.375 "num_base_bdevs_discovered": 4, 00:14:46.375 "num_base_bdevs_operational": 4, 00:14:46.375 "process": { 00:14:46.375 "type": "rebuild", 00:14:46.375 "target": "spare", 00:14:46.375 "progress": { 00:14:46.375 "blocks": 12288, 00:14:46.375 "percent": 18 00:14:46.375 } 00:14:46.375 }, 00:14:46.375 "base_bdevs_list": [ 00:14:46.375 { 00:14:46.375 "name": "spare", 00:14:46.375 "uuid": "99343db5-6813-5992-b2b7-9dfea65245f1", 00:14:46.375 "is_configured": true, 00:14:46.375 "data_offset": 0, 00:14:46.375 "data_size": 65536 00:14:46.375 }, 00:14:46.375 { 00:14:46.375 "name": "BaseBdev2", 00:14:46.375 "uuid": "0ccff546-92af-5335-9086-ef75dda7751c", 00:14:46.375 
"is_configured": true, 00:14:46.375 "data_offset": 0, 00:14:46.375 "data_size": 65536 00:14:46.375 }, 00:14:46.375 { 00:14:46.375 "name": "BaseBdev3", 00:14:46.375 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:46.375 "is_configured": true, 00:14:46.375 "data_offset": 0, 00:14:46.375 "data_size": 65536 00:14:46.375 }, 00:14:46.375 { 00:14:46.375 "name": "BaseBdev4", 00:14:46.375 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:46.375 "is_configured": true, 00:14:46.375 "data_offset": 0, 00:14:46.375 "data_size": 65536 00:14:46.375 } 00:14:46.375 ] 00:14:46.375 }' 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.375 [2024-11-08 16:56:15.794219] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.375 16:56:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.376 [2024-11-08 
16:56:15.822187] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.635 [2024-11-08 16:56:16.143907] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:14:46.635 [2024-11-08 16:56:16.144039] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.635 16:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.896 "name": "raid_bdev1", 00:14:46.896 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:46.896 
"strip_size_kb": 0, 00:14:46.896 "state": "online", 00:14:46.896 "raid_level": "raid1", 00:14:46.896 "superblock": false, 00:14:46.896 "num_base_bdevs": 4, 00:14:46.896 "num_base_bdevs_discovered": 3, 00:14:46.896 "num_base_bdevs_operational": 3, 00:14:46.896 "process": { 00:14:46.896 "type": "rebuild", 00:14:46.896 "target": "spare", 00:14:46.896 "progress": { 00:14:46.896 "blocks": 18432, 00:14:46.896 "percent": 28 00:14:46.896 } 00:14:46.896 }, 00:14:46.896 "base_bdevs_list": [ 00:14:46.896 { 00:14:46.896 "name": "spare", 00:14:46.896 "uuid": "99343db5-6813-5992-b2b7-9dfea65245f1", 00:14:46.896 "is_configured": true, 00:14:46.896 "data_offset": 0, 00:14:46.896 "data_size": 65536 00:14:46.896 }, 00:14:46.896 { 00:14:46.896 "name": null, 00:14:46.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.896 "is_configured": false, 00:14:46.896 "data_offset": 0, 00:14:46.896 "data_size": 65536 00:14:46.896 }, 00:14:46.896 { 00:14:46.896 "name": "BaseBdev3", 00:14:46.896 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:46.896 "is_configured": true, 00:14:46.896 "data_offset": 0, 00:14:46.896 "data_size": 65536 00:14:46.896 }, 00:14:46.896 { 00:14:46.896 "name": "BaseBdev4", 00:14:46.896 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:46.896 "is_configured": true, 00:14:46.896 "data_offset": 0, 00:14:46.896 "data_size": 65536 00:14:46.896 } 00:14:46.896 ] 00:14:46.896 }' 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.896 125.00 IOPS, 375.00 MiB/s [2024-11-08T16:56:16.424Z] 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # 
local timeout=401 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.896 "name": "raid_bdev1", 00:14:46.896 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:46.896 "strip_size_kb": 0, 00:14:46.896 "state": "online", 00:14:46.896 "raid_level": "raid1", 00:14:46.896 "superblock": false, 00:14:46.896 "num_base_bdevs": 4, 00:14:46.896 "num_base_bdevs_discovered": 3, 00:14:46.896 "num_base_bdevs_operational": 3, 00:14:46.896 "process": { 00:14:46.896 "type": "rebuild", 00:14:46.896 "target": "spare", 00:14:46.896 "progress": { 00:14:46.896 "blocks": 20480, 00:14:46.896 "percent": 31 00:14:46.896 } 00:14:46.896 }, 00:14:46.896 "base_bdevs_list": [ 00:14:46.896 { 00:14:46.896 "name": "spare", 00:14:46.896 "uuid": 
"99343db5-6813-5992-b2b7-9dfea65245f1", 00:14:46.896 "is_configured": true, 00:14:46.896 "data_offset": 0, 00:14:46.896 "data_size": 65536 00:14:46.896 }, 00:14:46.896 { 00:14:46.896 "name": null, 00:14:46.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.896 "is_configured": false, 00:14:46.896 "data_offset": 0, 00:14:46.896 "data_size": 65536 00:14:46.896 }, 00:14:46.896 { 00:14:46.896 "name": "BaseBdev3", 00:14:46.896 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:46.896 "is_configured": true, 00:14:46.896 "data_offset": 0, 00:14:46.896 "data_size": 65536 00:14:46.896 }, 00:14:46.896 { 00:14:46.896 "name": "BaseBdev4", 00:14:46.896 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:46.896 "is_configured": true, 00:14:46.896 "data_offset": 0, 00:14:46.896 "data_size": 65536 00:14:46.896 } 00:14:46.896 ] 00:14:46.896 }' 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.896 [2024-11-08 16:56:16.376220] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:46.896 [2024-11-08 16:56:16.376560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:46.896 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.897 16:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.163 [2024-11-08 16:56:16.669779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:47.421 [2024-11-08 16:56:16.874868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:47.986 110.60 IOPS, 331.80 MiB/s [2024-11-08T16:56:17.514Z] [2024-11-08 16:56:17.269377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.986 "name": "raid_bdev1", 00:14:47.986 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:47.986 "strip_size_kb": 0, 00:14:47.986 "state": "online", 00:14:47.986 "raid_level": "raid1", 00:14:47.986 "superblock": false, 00:14:47.986 "num_base_bdevs": 4, 00:14:47.986 "num_base_bdevs_discovered": 3, 00:14:47.986 "num_base_bdevs_operational": 3, 00:14:47.986 "process": { 00:14:47.986 
"type": "rebuild", 00:14:47.986 "target": "spare", 00:14:47.986 "progress": { 00:14:47.986 "blocks": 34816, 00:14:47.986 "percent": 53 00:14:47.986 } 00:14:47.986 }, 00:14:47.986 "base_bdevs_list": [ 00:14:47.986 { 00:14:47.986 "name": "spare", 00:14:47.986 "uuid": "99343db5-6813-5992-b2b7-9dfea65245f1", 00:14:47.986 "is_configured": true, 00:14:47.986 "data_offset": 0, 00:14:47.986 "data_size": 65536 00:14:47.986 }, 00:14:47.986 { 00:14:47.986 "name": null, 00:14:47.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.986 "is_configured": false, 00:14:47.986 "data_offset": 0, 00:14:47.986 "data_size": 65536 00:14:47.986 }, 00:14:47.986 { 00:14:47.986 "name": "BaseBdev3", 00:14:47.986 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:47.986 "is_configured": true, 00:14:47.986 "data_offset": 0, 00:14:47.986 "data_size": 65536 00:14:47.986 }, 00:14:47.986 { 00:14:47.986 "name": "BaseBdev4", 00:14:47.986 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:47.986 "is_configured": true, 00:14:47.986 "data_offset": 0, 00:14:47.986 "data_size": 65536 00:14:47.986 } 00:14:47.986 ] 00:14:47.986 }' 00:14:47.986 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.244 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.244 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.244 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.244 16:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.244 [2024-11-08 16:56:17.599780] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:48.503 [2024-11-08 16:56:17.948250] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 
00:14:48.762 [2024-11-08 16:56:18.057054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:48.762 [2024-11-08 16:56:18.057775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:49.331 100.67 IOPS, 302.00 MiB/s [2024-11-08T16:56:18.859Z] 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.331 "name": "raid_bdev1", 00:14:49.331 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:49.331 "strip_size_kb": 0, 00:14:49.331 "state": "online", 00:14:49.331 "raid_level": "raid1", 00:14:49.331 "superblock": false, 00:14:49.331 "num_base_bdevs": 4, 00:14:49.331 
"num_base_bdevs_discovered": 3, 00:14:49.331 "num_base_bdevs_operational": 3, 00:14:49.331 "process": { 00:14:49.331 "type": "rebuild", 00:14:49.331 "target": "spare", 00:14:49.331 "progress": { 00:14:49.331 "blocks": 53248, 00:14:49.331 "percent": 81 00:14:49.331 } 00:14:49.331 }, 00:14:49.331 "base_bdevs_list": [ 00:14:49.331 { 00:14:49.331 "name": "spare", 00:14:49.331 "uuid": "99343db5-6813-5992-b2b7-9dfea65245f1", 00:14:49.331 "is_configured": true, 00:14:49.331 "data_offset": 0, 00:14:49.331 "data_size": 65536 00:14:49.331 }, 00:14:49.331 { 00:14:49.331 "name": null, 00:14:49.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.331 "is_configured": false, 00:14:49.331 "data_offset": 0, 00:14:49.331 "data_size": 65536 00:14:49.331 }, 00:14:49.331 { 00:14:49.331 "name": "BaseBdev3", 00:14:49.331 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:49.331 "is_configured": true, 00:14:49.331 "data_offset": 0, 00:14:49.331 "data_size": 65536 00:14:49.331 }, 00:14:49.331 { 00:14:49.331 "name": "BaseBdev4", 00:14:49.331 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:49.331 "is_configured": true, 00:14:49.331 "data_offset": 0, 00:14:49.331 "data_size": 65536 00:14:49.331 } 00:14:49.331 ] 00:14:49.331 }' 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.331 16:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.631 [2024-11-08 16:56:19.155740] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:49.889 89.86 IOPS, 269.57 MiB/s [2024-11-08T16:56:19.417Z] [2024-11-08 
16:56:19.255553] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:49.889 [2024-11-08 16:56:19.258643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.456 "name": "raid_bdev1", 00:14:50.456 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:50.456 "strip_size_kb": 0, 00:14:50.456 "state": "online", 00:14:50.456 "raid_level": "raid1", 00:14:50.456 "superblock": false, 00:14:50.456 "num_base_bdevs": 4, 00:14:50.456 "num_base_bdevs_discovered": 3, 00:14:50.456 "num_base_bdevs_operational": 3, 00:14:50.456 "base_bdevs_list": [ 00:14:50.456 { 00:14:50.456 "name": "spare", 00:14:50.456 
"uuid": "99343db5-6813-5992-b2b7-9dfea65245f1", 00:14:50.456 "is_configured": true, 00:14:50.456 "data_offset": 0, 00:14:50.456 "data_size": 65536 00:14:50.456 }, 00:14:50.456 { 00:14:50.456 "name": null, 00:14:50.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.456 "is_configured": false, 00:14:50.456 "data_offset": 0, 00:14:50.456 "data_size": 65536 00:14:50.456 }, 00:14:50.456 { 00:14:50.456 "name": "BaseBdev3", 00:14:50.456 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:50.456 "is_configured": true, 00:14:50.456 "data_offset": 0, 00:14:50.456 "data_size": 65536 00:14:50.456 }, 00:14:50.456 { 00:14:50.456 "name": "BaseBdev4", 00:14:50.456 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:50.456 "is_configured": true, 00:14:50.456 "data_offset": 0, 00:14:50.456 "data_size": 65536 00:14:50.456 } 00:14:50.456 ] 00:14:50.456 }' 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.456 16:56:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.456 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.456 "name": "raid_bdev1", 00:14:50.456 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:50.456 "strip_size_kb": 0, 00:14:50.456 "state": "online", 00:14:50.456 "raid_level": "raid1", 00:14:50.456 "superblock": false, 00:14:50.456 "num_base_bdevs": 4, 00:14:50.456 "num_base_bdevs_discovered": 3, 00:14:50.456 "num_base_bdevs_operational": 3, 00:14:50.456 "base_bdevs_list": [ 00:14:50.456 { 00:14:50.456 "name": "spare", 00:14:50.456 "uuid": "99343db5-6813-5992-b2b7-9dfea65245f1", 00:14:50.456 "is_configured": true, 00:14:50.456 "data_offset": 0, 00:14:50.456 "data_size": 65536 00:14:50.456 }, 00:14:50.456 { 00:14:50.456 "name": null, 00:14:50.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.456 "is_configured": false, 00:14:50.456 "data_offset": 0, 00:14:50.456 "data_size": 65536 00:14:50.456 }, 00:14:50.456 { 00:14:50.457 "name": "BaseBdev3", 00:14:50.457 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:50.457 "is_configured": true, 00:14:50.457 "data_offset": 0, 00:14:50.457 "data_size": 65536 00:14:50.457 }, 00:14:50.457 { 00:14:50.457 "name": "BaseBdev4", 00:14:50.457 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:50.457 "is_configured": true, 00:14:50.457 "data_offset": 0, 00:14:50.457 "data_size": 65536 00:14:50.457 } 00:14:50.457 ] 00:14:50.457 }' 00:14:50.457 16:56:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.457 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.457 16:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.715 "name": "raid_bdev1", 00:14:50.715 "uuid": "9c56b505-44cd-4ff3-a022-d71e616c5b95", 00:14:50.715 "strip_size_kb": 0, 00:14:50.715 "state": "online", 00:14:50.715 "raid_level": "raid1", 00:14:50.715 "superblock": false, 00:14:50.715 "num_base_bdevs": 4, 00:14:50.715 "num_base_bdevs_discovered": 3, 00:14:50.715 "num_base_bdevs_operational": 3, 00:14:50.715 "base_bdevs_list": [ 00:14:50.715 { 00:14:50.715 "name": "spare", 00:14:50.715 "uuid": "99343db5-6813-5992-b2b7-9dfea65245f1", 00:14:50.715 "is_configured": true, 00:14:50.715 "data_offset": 0, 00:14:50.715 "data_size": 65536 00:14:50.715 }, 00:14:50.715 { 00:14:50.715 "name": null, 00:14:50.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.715 "is_configured": false, 00:14:50.715 "data_offset": 0, 00:14:50.715 "data_size": 65536 00:14:50.715 }, 00:14:50.715 { 00:14:50.715 "name": "BaseBdev3", 00:14:50.715 "uuid": "9c29ac64-d831-55cd-a5bc-1399805c479a", 00:14:50.715 "is_configured": true, 00:14:50.715 "data_offset": 0, 00:14:50.715 "data_size": 65536 00:14:50.715 }, 00:14:50.715 { 00:14:50.715 "name": "BaseBdev4", 00:14:50.715 "uuid": "a1b63e55-a218-5975-a8d6-122f10bac51b", 00:14:50.715 "is_configured": true, 00:14:50.715 "data_offset": 0, 00:14:50.715 "data_size": 65536 00:14:50.715 } 00:14:50.715 ] 00:14:50.715 }' 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.715 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.974 83.25 IOPS, 249.75 MiB/s [2024-11-08T16:56:20.502Z] 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:50.974 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.974 16:56:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.974 [2024-11-08 16:56:20.453519] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.974 [2024-11-08 16:56:20.453684] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.233 00:14:51.233 Latency(us) 00:14:51.233 [2024-11-08T16:56:20.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.233 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:51.233 raid_bdev1 : 8.36 81.65 244.95 0.00 0.00 16870.59 414.97 114015.47 00:14:51.233 [2024-11-08T16:56:20.761Z] =================================================================================================================== 00:14:51.233 [2024-11-08T16:56:20.761Z] Total : 81.65 244.95 0.00 0.00 16870.59 414.97 114015.47 00:14:51.233 [2024-11-08 16:56:20.567406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.233 [2024-11-08 16:56:20.567516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.233 [2024-11-08 16:56:20.567714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.233 [2024-11-08 16:56:20.567788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:51.233 { 00:14:51.233 "results": [ 00:14:51.233 { 00:14:51.233 "job": "raid_bdev1", 00:14:51.233 "core_mask": "0x1", 00:14:51.233 "workload": "randrw", 00:14:51.233 "percentage": 50, 00:14:51.233 "status": "finished", 00:14:51.233 "queue_depth": 2, 00:14:51.233 "io_size": 3145728, 00:14:51.233 "runtime": 8.364828, 00:14:51.233 "iops": 81.65140992737687, 00:14:51.233 "mibps": 244.95422978213062, 00:14:51.233 "io_failed": 0, 00:14:51.233 "io_timeout": 0, 00:14:51.233 "avg_latency_us": 16870.59428286458, 00:14:51.233 "min_latency_us": 414.9659388646288, 00:14:51.233 
"max_latency_us": 114015.46899563319 00:14:51.233 } 00:14:51.233 ], 00:14:51.233 "core_count": 1 00:14:51.233 } 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:51.233 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.233 16:56:20 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:51.492 /dev/nbd0 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.492 1+0 records in 00:14:51.492 1+0 records out 00:14:51.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445249 s, 9.2 MB/s 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.492 16:56:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 
00:14:51.751 /dev/nbd1 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.751 1+0 records in 00:14:51.751 1+0 records out 00:14:51.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377797 s, 10.8 MB/s 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 
00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:51.751 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:52.008 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.008 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:52.008 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:52.008 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:52.008 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.008 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:52.008 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:52.008 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:52.008 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:52.008 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:52.008 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 
-- # for bdev in "${base_bdevs[@]:1}" 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.009 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:52.267 /dev/nbd1 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:52.267 1+0 records in 00:14:52.267 1+0 records out 00:14:52.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316187 s, 13.0 MB/s 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:52.267 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.526 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:52.526 16:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:52.526 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:52.526 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.526 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:52.526 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:52.526 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.526 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:52.526 16:56:21 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:52.526 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:52.526 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.526 16:56:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.783 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89426 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89426 ']' 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89426 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89426 00:14:53.042 killing process with pid 89426 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 89426' 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89426 00:14:53.042 Received shutdown signal, test time was about 10.224779 seconds 00:14:53.042 00:14:53.042 Latency(us) 00:14:53.042 [2024-11-08T16:56:22.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.042 [2024-11-08T16:56:22.570Z] =================================================================================================================== 00:14:53.042 [2024-11-08T16:56:22.570Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.042 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89426 00:14:53.042 [2024-11-08 16:56:22.419043] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.042 [2024-11-08 16:56:22.468500] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:53.303 00:14:53.303 real 0m12.363s 00:14:53.303 user 0m16.165s 00:14:53.303 sys 0m1.800s 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 ************************************ 00:14:53.303 END TEST raid_rebuild_test_io 00:14:53.303 ************************************ 00:14:53.303 16:56:22 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:53.303 16:56:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:53.303 16:56:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.303 16:56:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 ************************************ 00:14:53.303 START TEST raid_rebuild_test_sb_io 00:14:53.303 ************************************ 
00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- 
# (( i++ )) 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89828 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89828 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89828 ']' 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.303 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.303 16:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.564 [2024-11-08 16:56:22.896606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:53.564 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:53.564 Zero copy mechanism will not be used. 00:14:53.564 [2024-11-08 16:56:22.896990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89828 ] 00:14:53.564 [2024-11-08 16:56:23.069384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.823 [2024-11-08 16:56:23.123951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.823 [2024-11-08 16:56:23.168947] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.823 [2024-11-08 16:56:23.169101] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 BaseBdev1_malloc 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 [2024-11-08 16:56:23.833869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:54.392 [2024-11-08 16:56:23.834062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.392 [2024-11-08 16:56:23.834106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:54.392 [2024-11-08 16:56:23.834126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.392 [2024-11-08 16:56:23.836897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.392 [2024-11-08 16:56:23.837033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:54.392 BaseBdev1 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 BaseBdev2_malloc 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 [2024-11-08 16:56:23.868853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:54.392 [2024-11-08 16:56:23.869004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.392 [2024-11-08 16:56:23.869083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:54.392 [2024-11-08 16:56:23.869131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.392 [2024-11-08 16:56:23.872250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.392 [2024-11-08 16:56:23.872355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:54.392 BaseBdev2 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 BaseBdev3_malloc 00:14:54.392 
16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 [2024-11-08 16:56:23.890899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:54.392 [2024-11-08 16:56:23.891025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.392 [2024-11-08 16:56:23.891090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:54.392 [2024-11-08 16:56:23.891133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.392 [2024-11-08 16:56:23.893788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.392 [2024-11-08 16:56:23.893892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:54.392 BaseBdev3 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 BaseBdev4_malloc 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 [2024-11-08 16:56:23.912639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:54.392 [2024-11-08 16:56:23.912748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.392 [2024-11-08 16:56:23.912800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:54.392 [2024-11-08 16:56:23.912812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.392 [2024-11-08 16:56:23.915502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.392 [2024-11-08 16:56:23.915562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:54.392 BaseBdev4 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.392 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.653 spare_malloc 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.653 spare_delay 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.653 [2024-11-08 16:56:23.942399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:54.653 [2024-11-08 16:56:23.942525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.653 [2024-11-08 16:56:23.942579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:54.653 [2024-11-08 16:56:23.942627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.653 [2024-11-08 16:56:23.945310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.653 [2024-11-08 16:56:23.945406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:54.653 spare 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.653 [2024-11-08 16:56:23.950510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.653 [2024-11-08 16:56:23.952863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:14:54.653 [2024-11-08 16:56:23.953012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.653 [2024-11-08 16:56:23.953097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:54.653 [2024-11-08 16:56:23.953339] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:54.653 [2024-11-08 16:56:23.953401] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:54.653 [2024-11-08 16:56:23.953780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:54.653 [2024-11-08 16:56:23.954020] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:54.653 [2024-11-08 16:56:23.954048] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:54.653 [2024-11-08 16:56:23.954243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.653 16:56:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.653 16:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.653 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.653 "name": "raid_bdev1", 00:14:54.653 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:14:54.653 "strip_size_kb": 0, 00:14:54.653 "state": "online", 00:14:54.653 "raid_level": "raid1", 00:14:54.653 "superblock": true, 00:14:54.653 "num_base_bdevs": 4, 00:14:54.653 "num_base_bdevs_discovered": 4, 00:14:54.653 "num_base_bdevs_operational": 4, 00:14:54.653 "base_bdevs_list": [ 00:14:54.653 { 00:14:54.653 "name": "BaseBdev1", 00:14:54.653 "uuid": "692e589f-62da-5947-98ac-cc53b66ff3b7", 00:14:54.653 "is_configured": true, 00:14:54.653 "data_offset": 2048, 00:14:54.653 "data_size": 63488 00:14:54.653 }, 00:14:54.653 { 00:14:54.653 "name": "BaseBdev2", 00:14:54.653 "uuid": "06898242-393f-51ed-bbe1-0ecb735396b4", 00:14:54.653 "is_configured": true, 00:14:54.653 "data_offset": 2048, 00:14:54.653 "data_size": 63488 00:14:54.653 }, 00:14:54.653 { 00:14:54.653 "name": "BaseBdev3", 00:14:54.653 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:14:54.653 "is_configured": true, 00:14:54.653 "data_offset": 2048, 
00:14:54.653 "data_size": 63488 00:14:54.653 }, 00:14:54.653 { 00:14:54.653 "name": "BaseBdev4", 00:14:54.653 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:14:54.653 "is_configured": true, 00:14:54.653 "data_offset": 2048, 00:14:54.653 "data_size": 63488 00:14:54.653 } 00:14:54.653 ] 00:14:54.653 }' 00:14:54.653 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.653 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.913 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:54.913 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.913 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.913 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:54.913 [2024-11-08 16:56:24.426124] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.913 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:55.173 16:56:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.173 [2024-11-08 16:56:24.509598] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.173 "name": "raid_bdev1", 00:14:55.173 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:14:55.173 "strip_size_kb": 0, 00:14:55.173 "state": "online", 00:14:55.173 "raid_level": "raid1", 00:14:55.173 "superblock": true, 00:14:55.173 "num_base_bdevs": 4, 00:14:55.173 "num_base_bdevs_discovered": 3, 00:14:55.173 "num_base_bdevs_operational": 3, 00:14:55.173 "base_bdevs_list": [ 00:14:55.173 { 00:14:55.173 "name": null, 00:14:55.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.173 "is_configured": false, 00:14:55.173 "data_offset": 0, 00:14:55.173 "data_size": 63488 00:14:55.173 }, 00:14:55.173 { 00:14:55.173 "name": "BaseBdev2", 00:14:55.173 "uuid": "06898242-393f-51ed-bbe1-0ecb735396b4", 00:14:55.173 "is_configured": true, 00:14:55.173 "data_offset": 2048, 00:14:55.173 "data_size": 63488 00:14:55.173 }, 00:14:55.173 { 00:14:55.173 "name": "BaseBdev3", 00:14:55.173 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:14:55.173 "is_configured": true, 00:14:55.173 "data_offset": 2048, 00:14:55.173 "data_size": 63488 00:14:55.173 }, 00:14:55.173 { 00:14:55.173 "name": "BaseBdev4", 00:14:55.173 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:14:55.173 "is_configured": true, 00:14:55.173 "data_offset": 2048, 00:14:55.173 "data_size": 63488 00:14:55.173 } 00:14:55.173 ] 00:14:55.173 }' 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.173 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.173 [2024-11-08 16:56:24.603565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:55.173 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:55.173 Zero copy mechanism will not be used. 00:14:55.173 Running I/O for 60 seconds... 00:14:55.434 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:55.434 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.434 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.434 [2024-11-08 16:56:24.945910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:55.693 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.693 16:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:55.693 [2024-11-08 16:56:25.014554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:55.693 [2024-11-08 16:56:25.016942] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:55.693 [2024-11-08 16:56:25.125388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:55.693 [2024-11-08 16:56:25.126831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:55.953 [2024-11-08 16:56:25.346646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:55.953 [2024-11-08 16:56:25.347001] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:14:56.212 134.00 IOPS, 402.00 MiB/s [2024-11-08T16:56:25.740Z] [2024-11-08 16:56:25.726519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:56.212 [2024-11-08 16:56:25.727284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:56.471 16:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.471 16:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.471 16:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.471 16:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.471 16:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.471 16:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.471 16:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.471 16:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.471 16:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.730 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.730 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.730 "name": "raid_bdev1", 00:14:56.730 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:14:56.730 "strip_size_kb": 0, 00:14:56.730 "state": "online", 00:14:56.730 "raid_level": "raid1", 00:14:56.730 "superblock": true, 00:14:56.730 "num_base_bdevs": 4, 00:14:56.730 "num_base_bdevs_discovered": 4, 00:14:56.730 "num_base_bdevs_operational": 4, 00:14:56.730 
"process": { 00:14:56.730 "type": "rebuild", 00:14:56.730 "target": "spare", 00:14:56.730 "progress": { 00:14:56.730 "blocks": 12288, 00:14:56.730 "percent": 19 00:14:56.730 } 00:14:56.730 }, 00:14:56.730 "base_bdevs_list": [ 00:14:56.730 { 00:14:56.730 "name": "spare", 00:14:56.730 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:14:56.730 "is_configured": true, 00:14:56.730 "data_offset": 2048, 00:14:56.730 "data_size": 63488 00:14:56.730 }, 00:14:56.730 { 00:14:56.730 "name": "BaseBdev2", 00:14:56.730 "uuid": "06898242-393f-51ed-bbe1-0ecb735396b4", 00:14:56.730 "is_configured": true, 00:14:56.730 "data_offset": 2048, 00:14:56.730 "data_size": 63488 00:14:56.730 }, 00:14:56.730 { 00:14:56.730 "name": "BaseBdev3", 00:14:56.730 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:14:56.730 "is_configured": true, 00:14:56.730 "data_offset": 2048, 00:14:56.730 "data_size": 63488 00:14:56.730 }, 00:14:56.730 { 00:14:56.730 "name": "BaseBdev4", 00:14:56.730 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:14:56.730 "is_configured": true, 00:14:56.730 "data_offset": 2048, 00:14:56.730 "data_size": 63488 00:14:56.730 } 00:14:56.730 ] 00:14:56.730 }' 00:14:56.730 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.730 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.730 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.730 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.730 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:56.730 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.730 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.730 [2024-11-08 
16:56:26.130181] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.730 [2024-11-08 16:56:26.237531] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:56.730 [2024-11-08 16:56:26.256431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.730 [2024-11-08 16:56:26.256624] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.730 [2024-11-08 16:56:26.256673] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:56.996 [2024-11-08 16:56:26.277205] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.996 
16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.996 "name": "raid_bdev1", 00:14:56.996 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:14:56.996 "strip_size_kb": 0, 00:14:56.996 "state": "online", 00:14:56.996 "raid_level": "raid1", 00:14:56.996 "superblock": true, 00:14:56.996 "num_base_bdevs": 4, 00:14:56.996 "num_base_bdevs_discovered": 3, 00:14:56.996 "num_base_bdevs_operational": 3, 00:14:56.996 "base_bdevs_list": [ 00:14:56.996 { 00:14:56.996 "name": null, 00:14:56.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.996 "is_configured": false, 00:14:56.996 "data_offset": 0, 00:14:56.996 "data_size": 63488 00:14:56.996 }, 00:14:56.996 { 00:14:56.996 "name": "BaseBdev2", 00:14:56.996 "uuid": "06898242-393f-51ed-bbe1-0ecb735396b4", 00:14:56.996 "is_configured": true, 00:14:56.996 "data_offset": 2048, 00:14:56.996 "data_size": 63488 00:14:56.996 }, 00:14:56.996 { 00:14:56.996 "name": "BaseBdev3", 00:14:56.996 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:14:56.996 "is_configured": true, 00:14:56.996 "data_offset": 2048, 00:14:56.996 "data_size": 63488 00:14:56.996 }, 00:14:56.996 { 00:14:56.996 "name": "BaseBdev4", 00:14:56.996 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:14:56.996 "is_configured": true, 00:14:56.996 "data_offset": 2048, 00:14:56.996 "data_size": 63488 00:14:56.996 } 00:14:56.996 ] 00:14:56.996 }' 00:14:56.996 16:56:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.996 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.277 140.00 IOPS, 420.00 MiB/s [2024-11-08T16:56:26.805Z] 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.277 "name": "raid_bdev1", 00:14:57.277 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:14:57.277 "strip_size_kb": 0, 00:14:57.277 "state": "online", 00:14:57.277 "raid_level": "raid1", 00:14:57.277 "superblock": true, 00:14:57.277 "num_base_bdevs": 4, 00:14:57.277 "num_base_bdevs_discovered": 3, 00:14:57.277 "num_base_bdevs_operational": 3, 00:14:57.277 "base_bdevs_list": [ 00:14:57.277 { 00:14:57.277 "name": null, 00:14:57.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.277 "is_configured": false, 
00:14:57.277 "data_offset": 0, 00:14:57.277 "data_size": 63488 00:14:57.277 }, 00:14:57.277 { 00:14:57.277 "name": "BaseBdev2", 00:14:57.277 "uuid": "06898242-393f-51ed-bbe1-0ecb735396b4", 00:14:57.277 "is_configured": true, 00:14:57.277 "data_offset": 2048, 00:14:57.277 "data_size": 63488 00:14:57.277 }, 00:14:57.277 { 00:14:57.277 "name": "BaseBdev3", 00:14:57.277 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:14:57.277 "is_configured": true, 00:14:57.277 "data_offset": 2048, 00:14:57.277 "data_size": 63488 00:14:57.277 }, 00:14:57.277 { 00:14:57.277 "name": "BaseBdev4", 00:14:57.277 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:14:57.277 "is_configured": true, 00:14:57.277 "data_offset": 2048, 00:14:57.277 "data_size": 63488 00:14:57.277 } 00:14:57.277 ] 00:14:57.277 }' 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:57.277 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.541 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:57.541 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:57.541 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.541 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.541 [2024-11-08 16:56:26.830403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.541 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.541 16:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:57.541 [2024-11-08 16:56:26.893159] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:57.541 [2024-11-08 16:56:26.895420] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:57.542 [2024-11-08 16:56:27.005897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:57.542 [2024-11-08 16:56:27.007231] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:57.800 [2024-11-08 16:56:27.221815] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:57.800 [2024-11-08 16:56:27.222246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:58.367 144.67 IOPS, 434.00 MiB/s [2024-11-08T16:56:27.895Z] [2024-11-08 16:56:27.626131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:58.367 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.367 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.367 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.367 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.367 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.367 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.367 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.367 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.367 16:56:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.367 [2024-11-08 16:56:27.885739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:58.367 [2024-11-08 16:56:27.886459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:58.367 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.625 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.625 "name": "raid_bdev1", 00:14:58.625 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:14:58.625 "strip_size_kb": 0, 00:14:58.625 "state": "online", 00:14:58.625 "raid_level": "raid1", 00:14:58.625 "superblock": true, 00:14:58.626 "num_base_bdevs": 4, 00:14:58.626 "num_base_bdevs_discovered": 4, 00:14:58.626 "num_base_bdevs_operational": 4, 00:14:58.626 "process": { 00:14:58.626 "type": "rebuild", 00:14:58.626 "target": "spare", 00:14:58.626 "progress": { 00:14:58.626 "blocks": 12288, 00:14:58.626 "percent": 19 00:14:58.626 } 00:14:58.626 }, 00:14:58.626 "base_bdevs_list": [ 00:14:58.626 { 00:14:58.626 "name": "spare", 00:14:58.626 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:14:58.626 "is_configured": true, 00:14:58.626 "data_offset": 2048, 00:14:58.626 "data_size": 63488 00:14:58.626 }, 00:14:58.626 { 00:14:58.626 "name": "BaseBdev2", 00:14:58.626 "uuid": "06898242-393f-51ed-bbe1-0ecb735396b4", 00:14:58.626 "is_configured": true, 00:14:58.626 "data_offset": 2048, 00:14:58.626 "data_size": 63488 00:14:58.626 }, 00:14:58.626 { 00:14:58.626 "name": "BaseBdev3", 00:14:58.626 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:14:58.626 "is_configured": true, 00:14:58.626 "data_offset": 2048, 00:14:58.626 "data_size": 63488 00:14:58.626 }, 00:14:58.626 { 00:14:58.626 "name": "BaseBdev4", 00:14:58.626 "uuid": 
"8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:14:58.626 "is_configured": true, 00:14:58.626 "data_offset": 2048, 00:14:58.626 "data_size": 63488 00:14:58.626 } 00:14:58.626 ] 00:14:58.626 }' 00:14:58.626 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.626 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.626 16:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.626 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.626 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:58.626 [2024-11-08 16:56:28.017364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:58.626 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:58.626 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:58.626 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:58.626 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:58.626 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:58.626 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:58.626 [2024-11-08 16:56:28.018101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:58.626 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.626 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.626 [2024-11-08 
16:56:28.020773] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.884 [2024-11-08 16:56:28.340326] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:14:58.884 [2024-11-08 16:56:28.340390] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.884 "name": "raid_bdev1", 00:14:58.884 "uuid": 
"bef382cf-8977-4574-97b4-099ee619ec43", 00:14:58.884 "strip_size_kb": 0, 00:14:58.884 "state": "online", 00:14:58.884 "raid_level": "raid1", 00:14:58.884 "superblock": true, 00:14:58.884 "num_base_bdevs": 4, 00:14:58.884 "num_base_bdevs_discovered": 3, 00:14:58.884 "num_base_bdevs_operational": 3, 00:14:58.884 "process": { 00:14:58.884 "type": "rebuild", 00:14:58.884 "target": "spare", 00:14:58.884 "progress": { 00:14:58.884 "blocks": 18432, 00:14:58.884 "percent": 29 00:14:58.884 } 00:14:58.884 }, 00:14:58.884 "base_bdevs_list": [ 00:14:58.884 { 00:14:58.884 "name": "spare", 00:14:58.884 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:14:58.884 "is_configured": true, 00:14:58.884 "data_offset": 2048, 00:14:58.884 "data_size": 63488 00:14:58.884 }, 00:14:58.884 { 00:14:58.884 "name": null, 00:14:58.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.884 "is_configured": false, 00:14:58.884 "data_offset": 0, 00:14:58.884 "data_size": 63488 00:14:58.884 }, 00:14:58.884 { 00:14:58.884 "name": "BaseBdev3", 00:14:58.884 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:14:58.884 "is_configured": true, 00:14:58.884 "data_offset": 2048, 00:14:58.884 "data_size": 63488 00:14:58.884 }, 00:14:58.884 { 00:14:58.884 "name": "BaseBdev4", 00:14:58.884 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:14:58.884 "is_configured": true, 00:14:58.884 "data_offset": 2048, 00:14:58.884 "data_size": 63488 00:14:58.884 } 00:14:58.884 ] 00:14:58.884 }' 00:14:58.884 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.143 [2024-11-08 16:56:28.470833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:59.143 
[2024-11-08 16:56:28.471439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=413 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.143 "name": "raid_bdev1", 00:14:59.143 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:14:59.143 "strip_size_kb": 0, 00:14:59.143 "state": "online", 00:14:59.143 "raid_level": "raid1", 00:14:59.143 "superblock": true, 00:14:59.143 "num_base_bdevs": 4, 
00:14:59.143 "num_base_bdevs_discovered": 3, 00:14:59.143 "num_base_bdevs_operational": 3, 00:14:59.143 "process": { 00:14:59.143 "type": "rebuild", 00:14:59.143 "target": "spare", 00:14:59.143 "progress": { 00:14:59.143 "blocks": 20480, 00:14:59.143 "percent": 32 00:14:59.143 } 00:14:59.143 }, 00:14:59.143 "base_bdevs_list": [ 00:14:59.143 { 00:14:59.143 "name": "spare", 00:14:59.143 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:14:59.143 "is_configured": true, 00:14:59.143 "data_offset": 2048, 00:14:59.143 "data_size": 63488 00:14:59.143 }, 00:14:59.143 { 00:14:59.143 "name": null, 00:14:59.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.143 "is_configured": false, 00:14:59.143 "data_offset": 0, 00:14:59.143 "data_size": 63488 00:14:59.143 }, 00:14:59.143 { 00:14:59.143 "name": "BaseBdev3", 00:14:59.143 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:14:59.143 "is_configured": true, 00:14:59.143 "data_offset": 2048, 00:14:59.143 "data_size": 63488 00:14:59.143 }, 00:14:59.143 { 00:14:59.143 "name": "BaseBdev4", 00:14:59.143 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:14:59.143 "is_configured": true, 00:14:59.143 "data_offset": 2048, 00:14:59.143 "data_size": 63488 00:14:59.143 } 00:14:59.143 ] 00:14:59.143 }' 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.143 [2024-11-08 16:56:28.606178] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:59.143 [2024-11-08 16:56:28.606756] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:59.143 130.00 IOPS, 390.00 MiB/s 
[2024-11-08T16:56:28.671Z] 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.143 16:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:59.712 [2024-11-08 16:56:28.945414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:59.712 [2024-11-08 16:56:29.065589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:59.972 [2024-11-08 16:56:29.422305] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:59.972 [2024-11-08 16:56:29.422619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:00.324 115.40 IOPS, 346.20 MiB/s [2024-11-08T16:56:29.852Z] 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.324 16:56:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.324 "name": "raid_bdev1", 00:15:00.324 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:00.324 "strip_size_kb": 0, 00:15:00.324 "state": "online", 00:15:00.324 "raid_level": "raid1", 00:15:00.324 "superblock": true, 00:15:00.324 "num_base_bdevs": 4, 00:15:00.324 "num_base_bdevs_discovered": 3, 00:15:00.324 "num_base_bdevs_operational": 3, 00:15:00.324 "process": { 00:15:00.324 "type": "rebuild", 00:15:00.324 "target": "spare", 00:15:00.324 "progress": { 00:15:00.324 "blocks": 34816, 00:15:00.324 "percent": 54 00:15:00.324 } 00:15:00.324 }, 00:15:00.324 "base_bdevs_list": [ 00:15:00.324 { 00:15:00.324 "name": "spare", 00:15:00.324 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:15:00.324 "is_configured": true, 00:15:00.324 "data_offset": 2048, 00:15:00.324 "data_size": 63488 00:15:00.324 }, 00:15:00.324 { 00:15:00.324 "name": null, 00:15:00.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.324 "is_configured": false, 00:15:00.324 "data_offset": 0, 00:15:00.324 "data_size": 63488 00:15:00.324 }, 00:15:00.324 { 00:15:00.324 "name": "BaseBdev3", 00:15:00.324 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:00.324 "is_configured": true, 00:15:00.324 "data_offset": 2048, 00:15:00.324 "data_size": 63488 00:15:00.324 }, 00:15:00.324 { 00:15:00.324 "name": "BaseBdev4", 00:15:00.324 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:00.324 "is_configured": true, 00:15:00.324 "data_offset": 2048, 00:15:00.324 "data_size": 63488 00:15:00.324 } 00:15:00.324 ] 00:15:00.324 }' 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.324 16:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.584 [2024-11-08 16:56:29.902254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:01.413 104.00 IOPS, 312.00 MiB/s [2024-11-08T16:56:30.941Z] 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.413 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.413 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.414 "name": "raid_bdev1", 
00:15:01.414 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:01.414 "strip_size_kb": 0, 00:15:01.414 "state": "online", 00:15:01.414 "raid_level": "raid1", 00:15:01.414 "superblock": true, 00:15:01.414 "num_base_bdevs": 4, 00:15:01.414 "num_base_bdevs_discovered": 3, 00:15:01.414 "num_base_bdevs_operational": 3, 00:15:01.414 "process": { 00:15:01.414 "type": "rebuild", 00:15:01.414 "target": "spare", 00:15:01.414 "progress": { 00:15:01.414 "blocks": 55296, 00:15:01.414 "percent": 87 00:15:01.414 } 00:15:01.414 }, 00:15:01.414 "base_bdevs_list": [ 00:15:01.414 { 00:15:01.414 "name": "spare", 00:15:01.414 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:15:01.414 "is_configured": true, 00:15:01.414 "data_offset": 2048, 00:15:01.414 "data_size": 63488 00:15:01.414 }, 00:15:01.414 { 00:15:01.414 "name": null, 00:15:01.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.414 "is_configured": false, 00:15:01.414 "data_offset": 0, 00:15:01.414 "data_size": 63488 00:15:01.414 }, 00:15:01.414 { 00:15:01.414 "name": "BaseBdev3", 00:15:01.414 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:01.414 "is_configured": true, 00:15:01.414 "data_offset": 2048, 00:15:01.414 "data_size": 63488 00:15:01.414 }, 00:15:01.414 { 00:15:01.414 "name": "BaseBdev4", 00:15:01.414 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:01.414 "is_configured": true, 00:15:01.414 "data_offset": 2048, 00:15:01.414 "data_size": 63488 00:15:01.414 } 00:15:01.414 ] 00:15:01.414 }' 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.414 16:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.414 16:56:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.673 [2024-11-08 16:56:31.114936] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:01.933 [2024-11-08 16:56:31.214834] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:01.933 [2024-11-08 16:56:31.216922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.451 94.00 IOPS, 282.00 MiB/s [2024-11-08T16:56:31.979Z] 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.451 "name": "raid_bdev1", 00:15:02.451 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:02.451 "strip_size_kb": 0, 
00:15:02.451 "state": "online", 00:15:02.451 "raid_level": "raid1", 00:15:02.451 "superblock": true, 00:15:02.451 "num_base_bdevs": 4, 00:15:02.451 "num_base_bdevs_discovered": 3, 00:15:02.451 "num_base_bdevs_operational": 3, 00:15:02.451 "base_bdevs_list": [ 00:15:02.451 { 00:15:02.451 "name": "spare", 00:15:02.451 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:15:02.451 "is_configured": true, 00:15:02.451 "data_offset": 2048, 00:15:02.451 "data_size": 63488 00:15:02.451 }, 00:15:02.451 { 00:15:02.451 "name": null, 00:15:02.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.451 "is_configured": false, 00:15:02.451 "data_offset": 0, 00:15:02.451 "data_size": 63488 00:15:02.451 }, 00:15:02.451 { 00:15:02.451 "name": "BaseBdev3", 00:15:02.451 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:02.451 "is_configured": true, 00:15:02.451 "data_offset": 2048, 00:15:02.451 "data_size": 63488 00:15:02.451 }, 00:15:02.451 { 00:15:02.451 "name": "BaseBdev4", 00:15:02.451 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:02.451 "is_configured": true, 00:15:02.451 "data_offset": 2048, 00:15:02.451 "data_size": 63488 00:15:02.451 } 00:15:02.451 ] 00:15:02.451 }' 00:15:02.451 16:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.711 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:02.711 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.711 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.712 "name": "raid_bdev1", 00:15:02.712 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:02.712 "strip_size_kb": 0, 00:15:02.712 "state": "online", 00:15:02.712 "raid_level": "raid1", 00:15:02.712 "superblock": true, 00:15:02.712 "num_base_bdevs": 4, 00:15:02.712 "num_base_bdevs_discovered": 3, 00:15:02.712 "num_base_bdevs_operational": 3, 00:15:02.712 "base_bdevs_list": [ 00:15:02.712 { 00:15:02.712 "name": "spare", 00:15:02.712 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:15:02.712 "is_configured": true, 00:15:02.712 "data_offset": 2048, 00:15:02.712 "data_size": 63488 00:15:02.712 }, 00:15:02.712 { 00:15:02.712 "name": null, 00:15:02.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.712 "is_configured": false, 00:15:02.712 "data_offset": 0, 00:15:02.712 "data_size": 63488 00:15:02.712 }, 00:15:02.712 { 00:15:02.712 "name": "BaseBdev3", 00:15:02.712 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:02.712 "is_configured": true, 00:15:02.712 "data_offset": 
2048, 00:15:02.712 "data_size": 63488 00:15:02.712 }, 00:15:02.712 { 00:15:02.712 "name": "BaseBdev4", 00:15:02.712 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:02.712 "is_configured": true, 00:15:02.712 "data_offset": 2048, 00:15:02.712 "data_size": 63488 00:15:02.712 } 00:15:02.712 ] 00:15:02.712 }' 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.712 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.972 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.972 "name": "raid_bdev1", 00:15:02.972 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:02.972 "strip_size_kb": 0, 00:15:02.972 "state": "online", 00:15:02.972 "raid_level": "raid1", 00:15:02.972 "superblock": true, 00:15:02.972 "num_base_bdevs": 4, 00:15:02.972 "num_base_bdevs_discovered": 3, 00:15:02.972 "num_base_bdevs_operational": 3, 00:15:02.972 "base_bdevs_list": [ 00:15:02.972 { 00:15:02.972 "name": "spare", 00:15:02.972 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:15:02.972 "is_configured": true, 00:15:02.972 "data_offset": 2048, 00:15:02.972 "data_size": 63488 00:15:02.972 }, 00:15:02.972 { 00:15:02.972 "name": null, 00:15:02.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.972 "is_configured": false, 00:15:02.972 "data_offset": 0, 00:15:02.972 "data_size": 63488 00:15:02.972 }, 00:15:02.972 { 00:15:02.972 "name": "BaseBdev3", 00:15:02.972 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:02.972 "is_configured": true, 00:15:02.972 "data_offset": 2048, 00:15:02.972 "data_size": 63488 00:15:02.972 }, 00:15:02.972 { 00:15:02.972 "name": "BaseBdev4", 00:15:02.972 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:02.972 "is_configured": true, 00:15:02.972 "data_offset": 2048, 00:15:02.972 "data_size": 63488 00:15:02.972 } 00:15:02.972 ] 00:15:02.972 }' 00:15:02.972 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:15:02.972 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.231 86.25 IOPS, 258.75 MiB/s [2024-11-08T16:56:32.759Z] 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:03.231 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.231 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.231 [2024-11-08 16:56:32.674131] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.231 [2024-11-08 16:56:32.674174] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.231 00:15:03.231 Latency(us) 00:15:03.231 [2024-11-08T16:56:32.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.231 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:03.231 raid_bdev1 : 8.16 85.01 255.04 0.00 0.00 15657.66 354.15 113099.68 00:15:03.231 [2024-11-08T16:56:32.759Z] =================================================================================================================== 00:15:03.231 [2024-11-08T16:56:32.759Z] Total : 85.01 255.04 0.00 0.00 15657.66 354.15 113099.68 00:15:03.491 [2024-11-08 16:56:32.758410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.491 [2024-11-08 16:56:32.758505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.491 [2024-11-08 16:56:32.758631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.491 [2024-11-08 16:56:32.758662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:03.491 { 00:15:03.491 "results": [ 00:15:03.491 { 00:15:03.491 "job": "raid_bdev1", 00:15:03.491 "core_mask": "0x1", 00:15:03.491 
"workload": "randrw", 00:15:03.491 "percentage": 50, 00:15:03.491 "status": "finished", 00:15:03.491 "queue_depth": 2, 00:15:03.491 "io_size": 3145728, 00:15:03.491 "runtime": 8.16354, 00:15:03.491 "iops": 85.01213934151116, 00:15:03.491 "mibps": 255.03641802453348, 00:15:03.491 "io_failed": 0, 00:15:03.491 "io_timeout": 0, 00:15:03.491 "avg_latency_us": 15657.656257629336, 00:15:03.491 "min_latency_us": 354.15196506550217, 00:15:03.491 "max_latency_us": 113099.68209606987 00:15:03.491 } 00:15:03.491 ], 00:15:03.491 "core_count": 1 00:15:03.491 } 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:03.491 16:56:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.491 16:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:03.751 /dev/nbd0 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.751 1+0 records in 00:15:03.751 1+0 
records out 00:15:03.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314004 s, 13.0 MB/s 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:03.751 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- 
# nbd_list=('/dev/nbd1') 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.752 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:04.012 /dev/nbd1 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.012 1+0 records in 00:15:04.012 1+0 records out 00:15:04.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000456945 s, 9.0 MB/s 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.012 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:04.272 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.273 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:04.273 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.273 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:04.273 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.273 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.273 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev4 /dev/nbd1 00:15:04.533 /dev/nbd1 00:15:04.533 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:04.533 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:04.533 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:04.533 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:04.533 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:04.533 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:04.533 16:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:04.533 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:04.533 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:04.533 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:04.533 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.533 1+0 records in 00:15:04.533 1+0 records out 00:15:04.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292927 s, 14.0 MB/s 00:15:04.533 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.533 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:04.533 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.533 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:04.533 16:56:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:04.533 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.533 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.533 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:04.793 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:04.793 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.793 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:04.793 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:04.793 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:04.793 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.793 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:05.053 16:56:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.053 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:05.313 16:56:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.313 [2024-11-08 16:56:34.632539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:05.313 [2024-11-08 16:56:34.632613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.313 [2024-11-08 16:56:34.632656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:05.313 [2024-11-08 16:56:34.632668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.313 [2024-11-08 16:56:34.635133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.313 [2024-11-08 16:56:34.635176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:05.313 [2024-11-08 16:56:34.635308] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:05.313 [2024-11-08 16:56:34.635376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.313 [2024-11-08 16:56:34.635512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:05.313 [2024-11-08 16:56:34.635646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:05.313 spare 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.313 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.313 [2024-11-08 16:56:34.735590] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:05.314 [2024-11-08 16:56:34.735678] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:05.314 [2024-11-08 16:56:34.736072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:15:05.314 [2024-11-08 16:56:34.736279] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:05.314 [2024-11-08 16:56:34.736302] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:05.314 [2024-11-08 16:56:34.736520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.314 "name": "raid_bdev1", 00:15:05.314 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:05.314 "strip_size_kb": 0, 00:15:05.314 "state": "online", 00:15:05.314 "raid_level": "raid1", 00:15:05.314 "superblock": true, 00:15:05.314 "num_base_bdevs": 4, 00:15:05.314 "num_base_bdevs_discovered": 3, 00:15:05.314 "num_base_bdevs_operational": 3, 00:15:05.314 "base_bdevs_list": [ 00:15:05.314 { 00:15:05.314 "name": "spare", 00:15:05.314 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:15:05.314 "is_configured": true, 00:15:05.314 "data_offset": 2048, 00:15:05.314 "data_size": 63488 00:15:05.314 }, 00:15:05.314 { 00:15:05.314 "name": null, 00:15:05.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.314 "is_configured": false, 00:15:05.314 "data_offset": 2048, 00:15:05.314 "data_size": 63488 00:15:05.314 }, 00:15:05.314 { 00:15:05.314 "name": "BaseBdev3", 00:15:05.314 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:05.314 "is_configured": true, 
00:15:05.314 "data_offset": 2048, 00:15:05.314 "data_size": 63488 00:15:05.314 }, 00:15:05.314 { 00:15:05.314 "name": "BaseBdev4", 00:15:05.314 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:05.314 "is_configured": true, 00:15:05.314 "data_offset": 2048, 00:15:05.314 "data_size": 63488 00:15:05.314 } 00:15:05.314 ] 00:15:05.314 }' 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.314 16:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.882 "name": "raid_bdev1", 00:15:05.882 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:05.882 "strip_size_kb": 0, 00:15:05.882 "state": "online", 00:15:05.882 "raid_level": "raid1", 00:15:05.882 
"superblock": true, 00:15:05.882 "num_base_bdevs": 4, 00:15:05.882 "num_base_bdevs_discovered": 3, 00:15:05.882 "num_base_bdevs_operational": 3, 00:15:05.882 "base_bdevs_list": [ 00:15:05.882 { 00:15:05.882 "name": "spare", 00:15:05.882 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:15:05.882 "is_configured": true, 00:15:05.882 "data_offset": 2048, 00:15:05.882 "data_size": 63488 00:15:05.882 }, 00:15:05.882 { 00:15:05.882 "name": null, 00:15:05.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.882 "is_configured": false, 00:15:05.882 "data_offset": 2048, 00:15:05.882 "data_size": 63488 00:15:05.882 }, 00:15:05.882 { 00:15:05.882 "name": "BaseBdev3", 00:15:05.882 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:05.882 "is_configured": true, 00:15:05.882 "data_offset": 2048, 00:15:05.882 "data_size": 63488 00:15:05.882 }, 00:15:05.882 { 00:15:05.882 "name": "BaseBdev4", 00:15:05.882 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:05.882 "is_configured": true, 00:15:05.882 "data_offset": 2048, 00:15:05.882 "data_size": 63488 00:15:05.882 } 00:15:05.882 ] 00:15:05.882 }' 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.882 [2024-11-08 16:56:35.383576] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.882 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.142 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.142 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.142 "name": "raid_bdev1", 00:15:06.142 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:06.142 "strip_size_kb": 0, 00:15:06.142 "state": "online", 00:15:06.142 "raid_level": "raid1", 00:15:06.142 "superblock": true, 00:15:06.142 "num_base_bdevs": 4, 00:15:06.142 "num_base_bdevs_discovered": 2, 00:15:06.142 "num_base_bdevs_operational": 2, 00:15:06.142 "base_bdevs_list": [ 00:15:06.142 { 00:15:06.142 "name": null, 00:15:06.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.142 "is_configured": false, 00:15:06.142 "data_offset": 0, 00:15:06.142 "data_size": 63488 00:15:06.142 }, 00:15:06.142 { 00:15:06.142 "name": null, 00:15:06.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.142 "is_configured": false, 00:15:06.142 "data_offset": 2048, 00:15:06.142 "data_size": 63488 00:15:06.142 }, 00:15:06.142 { 00:15:06.142 "name": "BaseBdev3", 00:15:06.142 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:06.142 "is_configured": true, 00:15:06.142 "data_offset": 2048, 00:15:06.142 "data_size": 63488 00:15:06.142 }, 00:15:06.142 { 00:15:06.142 "name": "BaseBdev4", 00:15:06.142 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:06.142 "is_configured": true, 00:15:06.142 "data_offset": 2048, 00:15:06.142 "data_size": 63488 00:15:06.142 } 00:15:06.142 ] 00:15:06.142 }' 00:15:06.142 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.142 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.402 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.402 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.402 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.402 [2024-11-08 16:56:35.879396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.402 [2024-11-08 16:56:35.879666] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:06.402 [2024-11-08 16:56:35.879693] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:06.402 [2024-11-08 16:56:35.879759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.402 [2024-11-08 16:56:35.883732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:15:06.402 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.402 16:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:06.402 [2024-11-08 16:56:35.886060] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.780 
16:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.780 "name": "raid_bdev1", 00:15:07.780 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:07.780 "strip_size_kb": 0, 00:15:07.780 "state": "online", 00:15:07.780 "raid_level": "raid1", 00:15:07.780 "superblock": true, 00:15:07.780 "num_base_bdevs": 4, 00:15:07.780 "num_base_bdevs_discovered": 3, 00:15:07.780 "num_base_bdevs_operational": 3, 00:15:07.780 "process": { 00:15:07.780 "type": "rebuild", 00:15:07.780 "target": "spare", 00:15:07.780 "progress": { 00:15:07.780 "blocks": 20480, 00:15:07.780 "percent": 32 00:15:07.780 } 00:15:07.780 }, 00:15:07.780 "base_bdevs_list": [ 00:15:07.780 { 00:15:07.780 "name": "spare", 00:15:07.780 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:15:07.780 "is_configured": true, 00:15:07.780 "data_offset": 2048, 00:15:07.780 "data_size": 63488 00:15:07.780 }, 00:15:07.780 { 00:15:07.780 "name": null, 00:15:07.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.780 "is_configured": false, 00:15:07.780 "data_offset": 2048, 00:15:07.780 "data_size": 63488 00:15:07.780 }, 00:15:07.780 { 00:15:07.780 "name": "BaseBdev3", 00:15:07.780 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:07.780 "is_configured": true, 00:15:07.780 "data_offset": 2048, 00:15:07.780 
"data_size": 63488 00:15:07.780 }, 00:15:07.780 { 00:15:07.780 "name": "BaseBdev4", 00:15:07.780 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:07.780 "is_configured": true, 00:15:07.780 "data_offset": 2048, 00:15:07.780 "data_size": 63488 00:15:07.780 } 00:15:07.780 ] 00:15:07.780 }' 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.780 16:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.780 [2024-11-08 16:56:37.039508] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:07.780 [2024-11-08 16:56:37.091938] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:07.780 [2024-11-08 16:56:37.092148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.780 [2024-11-08 16:56:37.092177] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:07.780 [2024-11-08 16:56:37.092187] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:07.780 16:56:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.780 "name": "raid_bdev1", 00:15:07.780 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:07.780 "strip_size_kb": 0, 00:15:07.780 "state": "online", 00:15:07.780 "raid_level": "raid1", 00:15:07.780 "superblock": true, 00:15:07.780 "num_base_bdevs": 4, 00:15:07.780 "num_base_bdevs_discovered": 2, 00:15:07.780 "num_base_bdevs_operational": 2, 
00:15:07.780 "base_bdevs_list": [ 00:15:07.780 { 00:15:07.780 "name": null, 00:15:07.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.780 "is_configured": false, 00:15:07.780 "data_offset": 0, 00:15:07.780 "data_size": 63488 00:15:07.780 }, 00:15:07.780 { 00:15:07.780 "name": null, 00:15:07.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.780 "is_configured": false, 00:15:07.780 "data_offset": 2048, 00:15:07.780 "data_size": 63488 00:15:07.780 }, 00:15:07.780 { 00:15:07.780 "name": "BaseBdev3", 00:15:07.780 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:07.780 "is_configured": true, 00:15:07.780 "data_offset": 2048, 00:15:07.780 "data_size": 63488 00:15:07.780 }, 00:15:07.780 { 00:15:07.780 "name": "BaseBdev4", 00:15:07.780 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:07.780 "is_configured": true, 00:15:07.780 "data_offset": 2048, 00:15:07.780 "data_size": 63488 00:15:07.780 } 00:15:07.780 ] 00:15:07.780 }' 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.780 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.039 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:08.039 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.039 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.039 [2024-11-08 16:56:37.539922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:08.039 [2024-11-08 16:56:37.540072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.039 [2024-11-08 16:56:37.540110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:08.039 [2024-11-08 16:56:37.540121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:08.039 [2024-11-08 16:56:37.540651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.039 [2024-11-08 16:56:37.540682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:08.039 [2024-11-08 16:56:37.540793] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:08.039 [2024-11-08 16:56:37.540807] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:08.039 [2024-11-08 16:56:37.540833] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:08.039 [2024-11-08 16:56:37.540876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.039 [2024-11-08 16:56:37.544725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:08.039 spare 00:15:08.039 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.039 16:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:08.039 [2024-11-08 16:56:37.546976] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.420 "name": "raid_bdev1", 00:15:09.420 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:09.420 "strip_size_kb": 0, 00:15:09.420 "state": "online", 00:15:09.420 "raid_level": "raid1", 00:15:09.420 "superblock": true, 00:15:09.420 "num_base_bdevs": 4, 00:15:09.420 "num_base_bdevs_discovered": 3, 00:15:09.420 "num_base_bdevs_operational": 3, 00:15:09.420 "process": { 00:15:09.420 "type": "rebuild", 00:15:09.420 "target": "spare", 00:15:09.420 "progress": { 00:15:09.420 "blocks": 20480, 00:15:09.420 "percent": 32 00:15:09.420 } 00:15:09.420 }, 00:15:09.420 "base_bdevs_list": [ 00:15:09.420 { 00:15:09.420 "name": "spare", 00:15:09.420 "uuid": "256092a7-1484-5588-88dc-642902677f54", 00:15:09.420 "is_configured": true, 00:15:09.420 "data_offset": 2048, 00:15:09.420 "data_size": 63488 00:15:09.420 }, 00:15:09.420 { 00:15:09.420 "name": null, 00:15:09.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.420 "is_configured": false, 00:15:09.420 "data_offset": 2048, 00:15:09.420 "data_size": 63488 00:15:09.420 }, 00:15:09.420 { 00:15:09.420 "name": "BaseBdev3", 00:15:09.420 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:09.420 "is_configured": true, 00:15:09.420 "data_offset": 2048, 00:15:09.420 "data_size": 63488 00:15:09.420 }, 00:15:09.420 { 00:15:09.420 "name": "BaseBdev4", 00:15:09.420 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:09.420 "is_configured": true, 00:15:09.420 "data_offset": 2048, 
00:15:09.420 "data_size": 63488 00:15:09.420 } 00:15:09.420 ] 00:15:09.420 }' 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.420 [2024-11-08 16:56:38.696262] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.420 [2024-11-08 16:56:38.752655] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:09.420 [2024-11-08 16:56:38.752771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.420 [2024-11-08 16:56:38.752792] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.420 [2024-11-08 16:56:38.752805] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:09.420 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.421 
16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.421 "name": "raid_bdev1", 00:15:09.421 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:09.421 "strip_size_kb": 0, 00:15:09.421 "state": "online", 00:15:09.421 "raid_level": "raid1", 00:15:09.421 "superblock": true, 00:15:09.421 "num_base_bdevs": 4, 00:15:09.421 "num_base_bdevs_discovered": 2, 00:15:09.421 "num_base_bdevs_operational": 2, 00:15:09.421 "base_bdevs_list": [ 00:15:09.421 { 00:15:09.421 "name": null, 00:15:09.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.421 "is_configured": false, 00:15:09.421 "data_offset": 0, 00:15:09.421 
"data_size": 63488 00:15:09.421 }, 00:15:09.421 { 00:15:09.421 "name": null, 00:15:09.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.421 "is_configured": false, 00:15:09.421 "data_offset": 2048, 00:15:09.421 "data_size": 63488 00:15:09.421 }, 00:15:09.421 { 00:15:09.421 "name": "BaseBdev3", 00:15:09.421 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:09.421 "is_configured": true, 00:15:09.421 "data_offset": 2048, 00:15:09.421 "data_size": 63488 00:15:09.421 }, 00:15:09.421 { 00:15:09.421 "name": "BaseBdev4", 00:15:09.421 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:09.421 "is_configured": true, 00:15:09.421 "data_offset": 2048, 00:15:09.421 "data_size": 63488 00:15:09.421 } 00:15:09.421 ] 00:15:09.421 }' 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.421 16:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.989 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:09.989 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.990 16:56:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.990 "name": "raid_bdev1", 00:15:09.990 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:09.990 "strip_size_kb": 0, 00:15:09.990 "state": "online", 00:15:09.990 "raid_level": "raid1", 00:15:09.990 "superblock": true, 00:15:09.990 "num_base_bdevs": 4, 00:15:09.990 "num_base_bdevs_discovered": 2, 00:15:09.990 "num_base_bdevs_operational": 2, 00:15:09.990 "base_bdevs_list": [ 00:15:09.990 { 00:15:09.990 "name": null, 00:15:09.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.990 "is_configured": false, 00:15:09.990 "data_offset": 0, 00:15:09.990 "data_size": 63488 00:15:09.990 }, 00:15:09.990 { 00:15:09.990 "name": null, 00:15:09.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.990 "is_configured": false, 00:15:09.990 "data_offset": 2048, 00:15:09.990 "data_size": 63488 00:15:09.990 }, 00:15:09.990 { 00:15:09.990 "name": "BaseBdev3", 00:15:09.990 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:09.990 "is_configured": true, 00:15:09.990 "data_offset": 2048, 00:15:09.990 "data_size": 63488 00:15:09.990 }, 00:15:09.990 { 00:15:09.990 "name": "BaseBdev4", 00:15:09.990 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:09.990 "is_configured": true, 00:15:09.990 "data_offset": 2048, 00:15:09.990 "data_size": 63488 00:15:09.990 } 00:15:09.990 ] 00:15:09.990 }' 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:09.990 16:56:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.990 [2024-11-08 16:56:39.372349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:09.990 [2024-11-08 16:56:39.372518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.990 [2024-11-08 16:56:39.372549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:09.990 [2024-11-08 16:56:39.372562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.990 [2024-11-08 16:56:39.373091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.990 [2024-11-08 16:56:39.373126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:09.990 [2024-11-08 16:56:39.373225] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:09.990 [2024-11-08 16:56:39.373272] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:09.990 [2024-11-08 16:56:39.373282] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:09.990 [2024-11-08 
16:56:39.373297] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:09.990 BaseBdev1 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.990 16:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.942 "name": "raid_bdev1", 00:15:10.942 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:10.942 "strip_size_kb": 0, 00:15:10.942 "state": "online", 00:15:10.942 "raid_level": "raid1", 00:15:10.942 "superblock": true, 00:15:10.942 "num_base_bdevs": 4, 00:15:10.942 "num_base_bdevs_discovered": 2, 00:15:10.942 "num_base_bdevs_operational": 2, 00:15:10.942 "base_bdevs_list": [ 00:15:10.942 { 00:15:10.942 "name": null, 00:15:10.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.942 "is_configured": false, 00:15:10.942 "data_offset": 0, 00:15:10.942 "data_size": 63488 00:15:10.942 }, 00:15:10.942 { 00:15:10.942 "name": null, 00:15:10.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.942 "is_configured": false, 00:15:10.942 "data_offset": 2048, 00:15:10.942 "data_size": 63488 00:15:10.942 }, 00:15:10.942 { 00:15:10.942 "name": "BaseBdev3", 00:15:10.942 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:10.942 "is_configured": true, 00:15:10.942 "data_offset": 2048, 00:15:10.942 "data_size": 63488 00:15:10.942 }, 00:15:10.942 { 00:15:10.942 "name": "BaseBdev4", 00:15:10.942 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:10.942 "is_configured": true, 00:15:10.942 "data_offset": 2048, 00:15:10.942 "data_size": 63488 00:15:10.942 } 00:15:10.942 ] 00:15:10.942 }' 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.942 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.511 "name": "raid_bdev1", 00:15:11.511 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:11.511 "strip_size_kb": 0, 00:15:11.511 "state": "online", 00:15:11.511 "raid_level": "raid1", 00:15:11.511 "superblock": true, 00:15:11.511 "num_base_bdevs": 4, 00:15:11.511 "num_base_bdevs_discovered": 2, 00:15:11.511 "num_base_bdevs_operational": 2, 00:15:11.511 "base_bdevs_list": [ 00:15:11.511 { 00:15:11.511 "name": null, 00:15:11.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.511 "is_configured": false, 00:15:11.511 "data_offset": 0, 00:15:11.511 "data_size": 63488 00:15:11.511 }, 00:15:11.511 { 00:15:11.511 "name": null, 00:15:11.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.511 "is_configured": false, 00:15:11.511 "data_offset": 2048, 00:15:11.511 "data_size": 63488 00:15:11.511 }, 00:15:11.511 { 00:15:11.511 "name": "BaseBdev3", 00:15:11.511 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:11.511 "is_configured": true, 00:15:11.511 "data_offset": 2048, 00:15:11.511 "data_size": 63488 00:15:11.511 }, 00:15:11.511 { 00:15:11.511 
"name": "BaseBdev4", 00:15:11.511 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:11.511 "is_configured": true, 00:15:11.511 "data_offset": 2048, 00:15:11.511 "data_size": 63488 00:15:11.511 } 00:15:11.511 ] 00:15:11.511 }' 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.511 [2024-11-08 16:56:40.994667] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.511 [2024-11-08 16:56:40.994951] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:11.511 [2024-11-08 16:56:40.995026] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:11.511 request: 00:15:11.511 { 00:15:11.511 "base_bdev": "BaseBdev1", 00:15:11.511 "raid_bdev": "raid_bdev1", 00:15:11.511 "method": "bdev_raid_add_base_bdev", 00:15:11.511 "req_id": 1 00:15:11.511 } 00:15:11.511 Got JSON-RPC error response 00:15:11.511 response: 00:15:11.511 { 00:15:11.511 "code": -22, 00:15:11.511 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:11.511 } 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.511 16:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.511 16:56:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.511 16:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.921 "name": "raid_bdev1", 00:15:12.921 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:12.921 "strip_size_kb": 0, 00:15:12.921 "state": "online", 00:15:12.921 "raid_level": "raid1", 00:15:12.921 "superblock": true, 00:15:12.921 "num_base_bdevs": 4, 00:15:12.921 "num_base_bdevs_discovered": 2, 00:15:12.921 "num_base_bdevs_operational": 2, 00:15:12.921 "base_bdevs_list": [ 00:15:12.921 { 00:15:12.921 "name": null, 00:15:12.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.921 "is_configured": false, 00:15:12.921 "data_offset": 0, 00:15:12.921 "data_size": 63488 00:15:12.921 }, 00:15:12.921 { 00:15:12.921 "name": null, 00:15:12.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.921 "is_configured": false, 
00:15:12.921 "data_offset": 2048, 00:15:12.921 "data_size": 63488 00:15:12.921 }, 00:15:12.921 { 00:15:12.921 "name": "BaseBdev3", 00:15:12.921 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:12.921 "is_configured": true, 00:15:12.921 "data_offset": 2048, 00:15:12.921 "data_size": 63488 00:15:12.921 }, 00:15:12.921 { 00:15:12.921 "name": "BaseBdev4", 00:15:12.921 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:12.921 "is_configured": true, 00:15:12.921 "data_offset": 2048, 00:15:12.921 "data_size": 63488 00:15:12.921 } 00:15:12.921 ] 00:15:12.921 }' 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.921 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.181 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.181 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.181 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.181 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.181 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.181 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.181 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.181 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.181 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.181 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:13.182 "name": "raid_bdev1", 00:15:13.182 "uuid": "bef382cf-8977-4574-97b4-099ee619ec43", 00:15:13.182 "strip_size_kb": 0, 00:15:13.182 "state": "online", 00:15:13.182 "raid_level": "raid1", 00:15:13.182 "superblock": true, 00:15:13.182 "num_base_bdevs": 4, 00:15:13.182 "num_base_bdevs_discovered": 2, 00:15:13.182 "num_base_bdevs_operational": 2, 00:15:13.182 "base_bdevs_list": [ 00:15:13.182 { 00:15:13.182 "name": null, 00:15:13.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.182 "is_configured": false, 00:15:13.182 "data_offset": 0, 00:15:13.182 "data_size": 63488 00:15:13.182 }, 00:15:13.182 { 00:15:13.182 "name": null, 00:15:13.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.182 "is_configured": false, 00:15:13.182 "data_offset": 2048, 00:15:13.182 "data_size": 63488 00:15:13.182 }, 00:15:13.182 { 00:15:13.182 "name": "BaseBdev3", 00:15:13.182 "uuid": "9b711235-f472-5a97-8ee0-21d96d2275ee", 00:15:13.182 "is_configured": true, 00:15:13.182 "data_offset": 2048, 00:15:13.182 "data_size": 63488 00:15:13.182 }, 00:15:13.182 { 00:15:13.182 "name": "BaseBdev4", 00:15:13.182 "uuid": "8b246a2b-3ab6-5021-a5ee-80da2ae00d31", 00:15:13.182 "is_configured": true, 00:15:13.182 "data_offset": 2048, 00:15:13.182 "data_size": 63488 00:15:13.182 } 00:15:13.182 ] 00:15:13.182 }' 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89828 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 
89828 ']' 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89828 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89828 00:15:13.182 killing process with pid 89828 00:15:13.182 Received shutdown signal, test time was about 18.067339 seconds 00:15:13.182 00:15:13.182 Latency(us) 00:15:13.182 [2024-11-08T16:56:42.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.182 [2024-11-08T16:56:42.710Z] =================================================================================================================== 00:15:13.182 [2024-11-08T16:56:42.710Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89828' 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89828 00:15:13.182 [2024-11-08 16:56:42.638654] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:13.182 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89828 00:15:13.182 [2024-11-08 16:56:42.638828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.182 [2024-11-08 16:56:42.638910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.182 [2024-11-08 16:56:42.638921] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:13.182 [2024-11-08 16:56:42.687847] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.441 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:13.441 00:15:13.441 real 0m20.151s 00:15:13.441 user 0m27.020s 00:15:13.441 sys 0m2.553s 00:15:13.441 ************************************ 00:15:13.441 END TEST raid_rebuild_test_sb_io 00:15:13.441 ************************************ 00:15:13.441 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:13.441 16:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.701 16:56:42 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:13.701 16:56:42 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:13.701 16:56:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:13.701 16:56:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:13.701 16:56:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.701 ************************************ 00:15:13.701 START TEST raid5f_state_function_test 00:15:13.701 ************************************ 00:15:13.701 16:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:15:13.701 16:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:13.701 16:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:13.701 16:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:13.701 16:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90540 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90540' 00:15:13.701 Process raid pid: 90540 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90540 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90540 ']' 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.701 16:56:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.701 [2024-11-08 16:56:43.088993] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:13.701 [2024-11-08 16:56:43.089204] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.960 [2024-11-08 16:56:43.251882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.960 [2024-11-08 16:56:43.303701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.960 [2024-11-08 16:56:43.346967] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.960 [2024-11-08 16:56:43.347092] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.528 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.528 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:14.528 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:14.528 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.528 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.529 [2024-11-08 16:56:44.020908] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:14.529 [2024-11-08 16:56:44.020964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:14.529 [2024-11-08 16:56:44.020978] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.529 [2024-11-08 16:56:44.020989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.529 [2024-11-08 16:56:44.020995] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:14.529 [2024-11-08 16:56:44.021009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.529 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:14.788 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.788 "name": "Existed_Raid", 00:15:14.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.788 "strip_size_kb": 64, 00:15:14.788 "state": "configuring", 00:15:14.788 "raid_level": "raid5f", 00:15:14.788 "superblock": false, 00:15:14.788 "num_base_bdevs": 3, 00:15:14.788 "num_base_bdevs_discovered": 0, 00:15:14.788 "num_base_bdevs_operational": 3, 00:15:14.788 "base_bdevs_list": [ 00:15:14.788 { 00:15:14.788 "name": "BaseBdev1", 00:15:14.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.788 "is_configured": false, 00:15:14.788 "data_offset": 0, 00:15:14.788 "data_size": 0 00:15:14.788 }, 00:15:14.788 { 00:15:14.788 "name": "BaseBdev2", 00:15:14.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.788 "is_configured": false, 00:15:14.788 "data_offset": 0, 00:15:14.788 "data_size": 0 00:15:14.788 }, 00:15:14.788 { 00:15:14.788 "name": "BaseBdev3", 00:15:14.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.788 "is_configured": false, 00:15:14.788 "data_offset": 0, 00:15:14.788 "data_size": 0 00:15:14.788 } 00:15:14.788 ] 00:15:14.788 }' 00:15:14.788 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.788 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.048 [2024-11-08 16:56:44.472078] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:15.048 [2024-11-08 16:56:44.472194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.048 [2024-11-08 16:56:44.484078] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:15.048 [2024-11-08 16:56:44.484165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:15.048 [2024-11-08 16:56:44.484196] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.048 [2024-11-08 16:56:44.484220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.048 [2024-11-08 16:56:44.484239] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:15.048 [2024-11-08 16:56:44.484261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.048 [2024-11-08 16:56:44.505286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.048 BaseBdev1 00:15:15.048 16:56:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.048 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.048 [ 00:15:15.048 { 00:15:15.048 "name": "BaseBdev1", 00:15:15.048 "aliases": [ 00:15:15.048 "a767efb0-1a47-47c7-bcc5-b3ffa103f421" 00:15:15.048 ], 00:15:15.048 "product_name": "Malloc disk", 00:15:15.048 "block_size": 512, 00:15:15.048 "num_blocks": 65536, 00:15:15.048 "uuid": "a767efb0-1a47-47c7-bcc5-b3ffa103f421", 00:15:15.048 "assigned_rate_limits": { 00:15:15.048 "rw_ios_per_sec": 0, 00:15:15.048 
"rw_mbytes_per_sec": 0, 00:15:15.048 "r_mbytes_per_sec": 0, 00:15:15.048 "w_mbytes_per_sec": 0 00:15:15.048 }, 00:15:15.048 "claimed": true, 00:15:15.048 "claim_type": "exclusive_write", 00:15:15.048 "zoned": false, 00:15:15.048 "supported_io_types": { 00:15:15.048 "read": true, 00:15:15.048 "write": true, 00:15:15.048 "unmap": true, 00:15:15.048 "flush": true, 00:15:15.048 "reset": true, 00:15:15.048 "nvme_admin": false, 00:15:15.048 "nvme_io": false, 00:15:15.048 "nvme_io_md": false, 00:15:15.048 "write_zeroes": true, 00:15:15.048 "zcopy": true, 00:15:15.048 "get_zone_info": false, 00:15:15.048 "zone_management": false, 00:15:15.048 "zone_append": false, 00:15:15.048 "compare": false, 00:15:15.048 "compare_and_write": false, 00:15:15.048 "abort": true, 00:15:15.048 "seek_hole": false, 00:15:15.048 "seek_data": false, 00:15:15.048 "copy": true, 00:15:15.048 "nvme_iov_md": false 00:15:15.048 }, 00:15:15.048 "memory_domains": [ 00:15:15.048 { 00:15:15.048 "dma_device_id": "system", 00:15:15.049 "dma_device_type": 1 00:15:15.049 }, 00:15:15.049 { 00:15:15.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.049 "dma_device_type": 2 00:15:15.049 } 00:15:15.049 ], 00:15:15.049 "driver_specific": {} 00:15:15.049 } 00:15:15.049 ] 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.049 16:56:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.049 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.308 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.308 "name": "Existed_Raid", 00:15:15.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.308 "strip_size_kb": 64, 00:15:15.308 "state": "configuring", 00:15:15.308 "raid_level": "raid5f", 00:15:15.308 "superblock": false, 00:15:15.308 "num_base_bdevs": 3, 00:15:15.308 "num_base_bdevs_discovered": 1, 00:15:15.308 "num_base_bdevs_operational": 3, 00:15:15.308 "base_bdevs_list": [ 00:15:15.308 { 00:15:15.308 "name": "BaseBdev1", 00:15:15.308 "uuid": "a767efb0-1a47-47c7-bcc5-b3ffa103f421", 00:15:15.308 "is_configured": true, 00:15:15.308 "data_offset": 0, 00:15:15.308 "data_size": 65536 00:15:15.308 }, 00:15:15.308 { 00:15:15.308 "name": 
"BaseBdev2", 00:15:15.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.308 "is_configured": false, 00:15:15.308 "data_offset": 0, 00:15:15.308 "data_size": 0 00:15:15.308 }, 00:15:15.308 { 00:15:15.308 "name": "BaseBdev3", 00:15:15.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.308 "is_configured": false, 00:15:15.308 "data_offset": 0, 00:15:15.308 "data_size": 0 00:15:15.308 } 00:15:15.308 ] 00:15:15.308 }' 00:15:15.308 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.308 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.568 [2024-11-08 16:56:44.952565] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:15.568 [2024-11-08 16:56:44.952702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.568 [2024-11-08 16:56:44.964583] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.568 [2024-11-08 16:56:44.966696] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:15.568 [2024-11-08 16:56:44.966779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.568 [2024-11-08 16:56:44.966817] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:15.568 [2024-11-08 16:56:44.966856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.568 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.569 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.569 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.569 16:56:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.569 16:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.569 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.569 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.569 16:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.569 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.569 "name": "Existed_Raid", 00:15:15.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.569 "strip_size_kb": 64, 00:15:15.569 "state": "configuring", 00:15:15.569 "raid_level": "raid5f", 00:15:15.569 "superblock": false, 00:15:15.569 "num_base_bdevs": 3, 00:15:15.569 "num_base_bdevs_discovered": 1, 00:15:15.569 "num_base_bdevs_operational": 3, 00:15:15.569 "base_bdevs_list": [ 00:15:15.569 { 00:15:15.569 "name": "BaseBdev1", 00:15:15.569 "uuid": "a767efb0-1a47-47c7-bcc5-b3ffa103f421", 00:15:15.569 "is_configured": true, 00:15:15.569 "data_offset": 0, 00:15:15.569 "data_size": 65536 00:15:15.569 }, 00:15:15.569 { 00:15:15.569 "name": "BaseBdev2", 00:15:15.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.569 "is_configured": false, 00:15:15.569 "data_offset": 0, 00:15:15.569 "data_size": 0 00:15:15.569 }, 00:15:15.569 { 00:15:15.569 "name": "BaseBdev3", 00:15:15.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.569 "is_configured": false, 00:15:15.569 "data_offset": 0, 00:15:15.569 "data_size": 0 00:15:15.569 } 00:15:15.569 ] 00:15:15.569 }' 00:15:15.569 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.569 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.138 [2024-11-08 16:56:45.424232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.138 BaseBdev2 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.138 [ 00:15:16.138 { 00:15:16.138 "name": "BaseBdev2", 00:15:16.138 "aliases": [ 00:15:16.138 "5676a3a2-6850-434e-9a78-d2a2fccbcc49" 00:15:16.138 ], 00:15:16.138 "product_name": "Malloc disk", 00:15:16.138 "block_size": 512, 00:15:16.138 "num_blocks": 65536, 00:15:16.138 "uuid": "5676a3a2-6850-434e-9a78-d2a2fccbcc49", 00:15:16.138 "assigned_rate_limits": { 00:15:16.138 "rw_ios_per_sec": 0, 00:15:16.138 "rw_mbytes_per_sec": 0, 00:15:16.138 "r_mbytes_per_sec": 0, 00:15:16.138 "w_mbytes_per_sec": 0 00:15:16.138 }, 00:15:16.138 "claimed": true, 00:15:16.138 "claim_type": "exclusive_write", 00:15:16.138 "zoned": false, 00:15:16.138 "supported_io_types": { 00:15:16.138 "read": true, 00:15:16.138 "write": true, 00:15:16.138 "unmap": true, 00:15:16.138 "flush": true, 00:15:16.138 "reset": true, 00:15:16.138 "nvme_admin": false, 00:15:16.138 "nvme_io": false, 00:15:16.138 "nvme_io_md": false, 00:15:16.138 "write_zeroes": true, 00:15:16.138 "zcopy": true, 00:15:16.138 "get_zone_info": false, 00:15:16.138 "zone_management": false, 00:15:16.138 "zone_append": false, 00:15:16.138 "compare": false, 00:15:16.138 "compare_and_write": false, 00:15:16.138 "abort": true, 00:15:16.138 "seek_hole": false, 00:15:16.138 "seek_data": false, 00:15:16.138 "copy": true, 00:15:16.138 "nvme_iov_md": false 00:15:16.138 }, 00:15:16.138 "memory_domains": [ 00:15:16.138 { 00:15:16.138 "dma_device_id": "system", 00:15:16.138 "dma_device_type": 1 00:15:16.138 }, 00:15:16.138 { 00:15:16.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.138 "dma_device_type": 2 00:15:16.138 } 00:15:16.138 ], 00:15:16.138 "driver_specific": {} 00:15:16.138 } 00:15:16.138 ] 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.138 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:16.138 "name": "Existed_Raid", 00:15:16.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.138 "strip_size_kb": 64, 00:15:16.138 "state": "configuring", 00:15:16.138 "raid_level": "raid5f", 00:15:16.138 "superblock": false, 00:15:16.138 "num_base_bdevs": 3, 00:15:16.138 "num_base_bdevs_discovered": 2, 00:15:16.138 "num_base_bdevs_operational": 3, 00:15:16.138 "base_bdevs_list": [ 00:15:16.138 { 00:15:16.138 "name": "BaseBdev1", 00:15:16.138 "uuid": "a767efb0-1a47-47c7-bcc5-b3ffa103f421", 00:15:16.138 "is_configured": true, 00:15:16.139 "data_offset": 0, 00:15:16.139 "data_size": 65536 00:15:16.139 }, 00:15:16.139 { 00:15:16.139 "name": "BaseBdev2", 00:15:16.139 "uuid": "5676a3a2-6850-434e-9a78-d2a2fccbcc49", 00:15:16.139 "is_configured": true, 00:15:16.139 "data_offset": 0, 00:15:16.139 "data_size": 65536 00:15:16.139 }, 00:15:16.139 { 00:15:16.139 "name": "BaseBdev3", 00:15:16.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.139 "is_configured": false, 00:15:16.139 "data_offset": 0, 00:15:16.139 "data_size": 0 00:15:16.139 } 00:15:16.139 ] 00:15:16.139 }' 00:15:16.139 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.139 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.398 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:16.398 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.398 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.398 [2024-11-08 16:56:45.898513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.398 [2024-11-08 16:56:45.898581] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:16.398 [2024-11-08 16:56:45.898593] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:16.398 [2024-11-08 16:56:45.898898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:16.398 [2024-11-08 16:56:45.899352] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:16.398 [2024-11-08 16:56:45.899364] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:16.398 [2024-11-08 16:56:45.899578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.398 BaseBdev3 00:15:16.398 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.399 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.659 [ 00:15:16.659 { 00:15:16.659 "name": "BaseBdev3", 00:15:16.659 "aliases": [ 00:15:16.659 "922e8876-df2c-4ef1-849d-6552ce5e14e6" 00:15:16.659 ], 00:15:16.659 "product_name": "Malloc disk", 00:15:16.659 "block_size": 512, 00:15:16.659 "num_blocks": 65536, 00:15:16.659 "uuid": "922e8876-df2c-4ef1-849d-6552ce5e14e6", 00:15:16.659 "assigned_rate_limits": { 00:15:16.659 "rw_ios_per_sec": 0, 00:15:16.659 "rw_mbytes_per_sec": 0, 00:15:16.659 "r_mbytes_per_sec": 0, 00:15:16.659 "w_mbytes_per_sec": 0 00:15:16.659 }, 00:15:16.659 "claimed": true, 00:15:16.659 "claim_type": "exclusive_write", 00:15:16.659 "zoned": false, 00:15:16.659 "supported_io_types": { 00:15:16.659 "read": true, 00:15:16.659 "write": true, 00:15:16.659 "unmap": true, 00:15:16.659 "flush": true, 00:15:16.659 "reset": true, 00:15:16.659 "nvme_admin": false, 00:15:16.659 "nvme_io": false, 00:15:16.659 "nvme_io_md": false, 00:15:16.659 "write_zeroes": true, 00:15:16.659 "zcopy": true, 00:15:16.659 "get_zone_info": false, 00:15:16.659 "zone_management": false, 00:15:16.659 "zone_append": false, 00:15:16.659 "compare": false, 00:15:16.659 "compare_and_write": false, 00:15:16.659 "abort": true, 00:15:16.659 "seek_hole": false, 00:15:16.659 "seek_data": false, 00:15:16.659 "copy": true, 00:15:16.659 "nvme_iov_md": false 00:15:16.659 }, 00:15:16.659 "memory_domains": [ 00:15:16.659 { 00:15:16.659 "dma_device_id": "system", 00:15:16.659 "dma_device_type": 1 00:15:16.659 }, 00:15:16.659 { 00:15:16.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.659 "dma_device_type": 2 00:15:16.659 } 00:15:16.659 ], 00:15:16.659 "driver_specific": {} 00:15:16.659 } 00:15:16.659 ] 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.659 16:56:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.659 "name": "Existed_Raid", 00:15:16.659 "uuid": "72356c2d-418d-47b3-b3c9-e279c89255f7", 00:15:16.659 "strip_size_kb": 64, 00:15:16.659 "state": "online", 00:15:16.659 "raid_level": "raid5f", 00:15:16.659 "superblock": false, 00:15:16.659 "num_base_bdevs": 3, 00:15:16.659 "num_base_bdevs_discovered": 3, 00:15:16.659 "num_base_bdevs_operational": 3, 00:15:16.659 "base_bdevs_list": [ 00:15:16.659 { 00:15:16.659 "name": "BaseBdev1", 00:15:16.659 "uuid": "a767efb0-1a47-47c7-bcc5-b3ffa103f421", 00:15:16.659 "is_configured": true, 00:15:16.659 "data_offset": 0, 00:15:16.659 "data_size": 65536 00:15:16.659 }, 00:15:16.659 { 00:15:16.659 "name": "BaseBdev2", 00:15:16.659 "uuid": "5676a3a2-6850-434e-9a78-d2a2fccbcc49", 00:15:16.659 "is_configured": true, 00:15:16.659 "data_offset": 0, 00:15:16.659 "data_size": 65536 00:15:16.659 }, 00:15:16.659 { 00:15:16.659 "name": "BaseBdev3", 00:15:16.659 "uuid": "922e8876-df2c-4ef1-849d-6552ce5e14e6", 00:15:16.659 "is_configured": true, 00:15:16.659 "data_offset": 0, 00:15:16.659 "data_size": 65536 00:15:16.659 } 00:15:16.659 ] 00:15:16.659 }' 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.659 16:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.928 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:16.928 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:16.928 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:16.928 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:16.928 16:56:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:16.928 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:16.928 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:16.928 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.928 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.928 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:16.928 [2024-11-08 16:56:46.385952] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.928 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.928 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:16.928 "name": "Existed_Raid", 00:15:16.928 "aliases": [ 00:15:16.928 "72356c2d-418d-47b3-b3c9-e279c89255f7" 00:15:16.928 ], 00:15:16.928 "product_name": "Raid Volume", 00:15:16.928 "block_size": 512, 00:15:16.928 "num_blocks": 131072, 00:15:16.928 "uuid": "72356c2d-418d-47b3-b3c9-e279c89255f7", 00:15:16.928 "assigned_rate_limits": { 00:15:16.928 "rw_ios_per_sec": 0, 00:15:16.928 "rw_mbytes_per_sec": 0, 00:15:16.928 "r_mbytes_per_sec": 0, 00:15:16.928 "w_mbytes_per_sec": 0 00:15:16.928 }, 00:15:16.928 "claimed": false, 00:15:16.928 "zoned": false, 00:15:16.928 "supported_io_types": { 00:15:16.928 "read": true, 00:15:16.928 "write": true, 00:15:16.928 "unmap": false, 00:15:16.928 "flush": false, 00:15:16.928 "reset": true, 00:15:16.928 "nvme_admin": false, 00:15:16.928 "nvme_io": false, 00:15:16.928 "nvme_io_md": false, 00:15:16.928 "write_zeroes": true, 00:15:16.928 "zcopy": false, 00:15:16.928 "get_zone_info": false, 00:15:16.928 "zone_management": false, 00:15:16.928 "zone_append": false, 
00:15:16.928 "compare": false, 00:15:16.928 "compare_and_write": false, 00:15:16.928 "abort": false, 00:15:16.928 "seek_hole": false, 00:15:16.928 "seek_data": false, 00:15:16.928 "copy": false, 00:15:16.928 "nvme_iov_md": false 00:15:16.928 }, 00:15:16.928 "driver_specific": { 00:15:16.928 "raid": { 00:15:16.928 "uuid": "72356c2d-418d-47b3-b3c9-e279c89255f7", 00:15:16.928 "strip_size_kb": 64, 00:15:16.928 "state": "online", 00:15:16.928 "raid_level": "raid5f", 00:15:16.928 "superblock": false, 00:15:16.928 "num_base_bdevs": 3, 00:15:16.928 "num_base_bdevs_discovered": 3, 00:15:16.928 "num_base_bdevs_operational": 3, 00:15:16.928 "base_bdevs_list": [ 00:15:16.928 { 00:15:16.928 "name": "BaseBdev1", 00:15:16.928 "uuid": "a767efb0-1a47-47c7-bcc5-b3ffa103f421", 00:15:16.928 "is_configured": true, 00:15:16.928 "data_offset": 0, 00:15:16.928 "data_size": 65536 00:15:16.928 }, 00:15:16.929 { 00:15:16.929 "name": "BaseBdev2", 00:15:16.929 "uuid": "5676a3a2-6850-434e-9a78-d2a2fccbcc49", 00:15:16.929 "is_configured": true, 00:15:16.929 "data_offset": 0, 00:15:16.929 "data_size": 65536 00:15:16.929 }, 00:15:16.929 { 00:15:16.929 "name": "BaseBdev3", 00:15:16.929 "uuid": "922e8876-df2c-4ef1-849d-6552ce5e14e6", 00:15:16.929 "is_configured": true, 00:15:16.929 "data_offset": 0, 00:15:16.929 "data_size": 65536 00:15:16.929 } 00:15:16.929 ] 00:15:16.929 } 00:15:16.929 } 00:15:16.929 }' 00:15:16.929 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.190 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:17.190 BaseBdev2 00:15:17.190 BaseBdev3' 00:15:17.190 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.191 [2024-11-08 16:56:46.701339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:17.191 
16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.191 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.500 "name": "Existed_Raid", 00:15:17.500 "uuid": "72356c2d-418d-47b3-b3c9-e279c89255f7", 00:15:17.500 "strip_size_kb": 64, 00:15:17.500 "state": 
"online", 00:15:17.500 "raid_level": "raid5f", 00:15:17.500 "superblock": false, 00:15:17.500 "num_base_bdevs": 3, 00:15:17.500 "num_base_bdevs_discovered": 2, 00:15:17.500 "num_base_bdevs_operational": 2, 00:15:17.500 "base_bdevs_list": [ 00:15:17.500 { 00:15:17.500 "name": null, 00:15:17.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.500 "is_configured": false, 00:15:17.500 "data_offset": 0, 00:15:17.500 "data_size": 65536 00:15:17.500 }, 00:15:17.500 { 00:15:17.500 "name": "BaseBdev2", 00:15:17.500 "uuid": "5676a3a2-6850-434e-9a78-d2a2fccbcc49", 00:15:17.500 "is_configured": true, 00:15:17.500 "data_offset": 0, 00:15:17.500 "data_size": 65536 00:15:17.500 }, 00:15:17.500 { 00:15:17.500 "name": "BaseBdev3", 00:15:17.500 "uuid": "922e8876-df2c-4ef1-849d-6552ce5e14e6", 00:15:17.500 "is_configured": true, 00:15:17.500 "data_offset": 0, 00:15:17.500 "data_size": 65536 00:15:17.500 } 00:15:17.500 ] 00:15:17.500 }' 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.500 16:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.760 [2024-11-08 16:56:47.216166] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:17.760 [2024-11-08 16:56:47.216260] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.760 [2024-11-08 16:56:47.227244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.760 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.020 [2024-11-08 16:56:47.287245] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:18.020 [2024-11-08 16:56:47.287303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.020 BaseBdev2 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:18.020 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:18.021 [ 00:15:18.021 { 00:15:18.021 "name": "BaseBdev2", 00:15:18.021 "aliases": [ 00:15:18.021 "14c06d8a-3745-44a3-b955-b37d10f16321" 00:15:18.021 ], 00:15:18.021 "product_name": "Malloc disk", 00:15:18.021 "block_size": 512, 00:15:18.021 "num_blocks": 65536, 00:15:18.021 "uuid": "14c06d8a-3745-44a3-b955-b37d10f16321", 00:15:18.021 "assigned_rate_limits": { 00:15:18.021 "rw_ios_per_sec": 0, 00:15:18.021 "rw_mbytes_per_sec": 0, 00:15:18.021 "r_mbytes_per_sec": 0, 00:15:18.021 "w_mbytes_per_sec": 0 00:15:18.021 }, 00:15:18.021 "claimed": false, 00:15:18.021 "zoned": false, 00:15:18.021 "supported_io_types": { 00:15:18.021 "read": true, 00:15:18.021 "write": true, 00:15:18.021 "unmap": true, 00:15:18.021 "flush": true, 00:15:18.021 "reset": true, 00:15:18.021 "nvme_admin": false, 00:15:18.021 "nvme_io": false, 00:15:18.021 "nvme_io_md": false, 00:15:18.021 "write_zeroes": true, 00:15:18.021 "zcopy": true, 00:15:18.021 "get_zone_info": false, 00:15:18.021 "zone_management": false, 00:15:18.021 "zone_append": false, 00:15:18.021 "compare": false, 00:15:18.021 "compare_and_write": false, 00:15:18.021 "abort": true, 00:15:18.021 "seek_hole": false, 00:15:18.021 "seek_data": false, 00:15:18.021 "copy": true, 00:15:18.021 "nvme_iov_md": false 00:15:18.021 }, 00:15:18.021 "memory_domains": [ 00:15:18.021 { 00:15:18.021 "dma_device_id": "system", 00:15:18.021 "dma_device_type": 1 00:15:18.021 }, 00:15:18.021 { 00:15:18.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.021 "dma_device_type": 2 00:15:18.021 } 00:15:18.021 ], 00:15:18.021 "driver_specific": {} 00:15:18.021 } 00:15:18.021 ] 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.021 BaseBdev3 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.021 [ 00:15:18.021 { 00:15:18.021 "name": "BaseBdev3", 00:15:18.021 "aliases": [ 00:15:18.021 "fd829c60-465b-45f8-a95f-19bd5141c8d5" 00:15:18.021 ], 00:15:18.021 "product_name": "Malloc disk", 00:15:18.021 "block_size": 512, 00:15:18.021 "num_blocks": 65536, 00:15:18.021 "uuid": "fd829c60-465b-45f8-a95f-19bd5141c8d5", 00:15:18.021 "assigned_rate_limits": { 00:15:18.021 "rw_ios_per_sec": 0, 00:15:18.021 "rw_mbytes_per_sec": 0, 00:15:18.021 "r_mbytes_per_sec": 0, 00:15:18.021 "w_mbytes_per_sec": 0 00:15:18.021 }, 00:15:18.021 "claimed": false, 00:15:18.021 "zoned": false, 00:15:18.021 "supported_io_types": { 00:15:18.021 "read": true, 00:15:18.021 "write": true, 00:15:18.021 "unmap": true, 00:15:18.021 "flush": true, 00:15:18.021 "reset": true, 00:15:18.021 "nvme_admin": false, 00:15:18.021 "nvme_io": false, 00:15:18.021 "nvme_io_md": false, 00:15:18.021 "write_zeroes": true, 00:15:18.021 "zcopy": true, 00:15:18.021 "get_zone_info": false, 00:15:18.021 "zone_management": false, 00:15:18.021 "zone_append": false, 00:15:18.021 "compare": false, 00:15:18.021 "compare_and_write": false, 00:15:18.021 "abort": true, 00:15:18.021 "seek_hole": false, 00:15:18.021 "seek_data": false, 00:15:18.021 "copy": true, 00:15:18.021 "nvme_iov_md": false 00:15:18.021 }, 00:15:18.021 "memory_domains": [ 00:15:18.021 { 00:15:18.021 "dma_device_id": "system", 00:15:18.021 "dma_device_type": 1 00:15:18.021 }, 00:15:18.021 { 00:15:18.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.021 "dma_device_type": 2 00:15:18.021 } 00:15:18.021 ], 00:15:18.021 "driver_specific": {} 00:15:18.021 } 00:15:18.021 ] 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:18.021 16:56:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.021 [2024-11-08 16:56:47.457042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.021 [2024-11-08 16:56:47.457147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.021 [2024-11-08 16:56:47.457195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.021 [2024-11-08 16:56:47.459103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.021 16:56:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.021 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.021 "name": "Existed_Raid", 00:15:18.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.021 "strip_size_kb": 64, 00:15:18.021 "state": "configuring", 00:15:18.021 "raid_level": "raid5f", 00:15:18.021 "superblock": false, 00:15:18.021 "num_base_bdevs": 3, 00:15:18.021 "num_base_bdevs_discovered": 2, 00:15:18.021 "num_base_bdevs_operational": 3, 00:15:18.021 "base_bdevs_list": [ 00:15:18.021 { 00:15:18.021 "name": "BaseBdev1", 00:15:18.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.021 "is_configured": false, 00:15:18.021 "data_offset": 0, 00:15:18.021 "data_size": 0 00:15:18.021 }, 00:15:18.021 { 00:15:18.021 "name": "BaseBdev2", 00:15:18.021 "uuid": "14c06d8a-3745-44a3-b955-b37d10f16321", 00:15:18.021 "is_configured": true, 00:15:18.021 "data_offset": 0, 00:15:18.021 "data_size": 65536 00:15:18.021 }, 00:15:18.021 { 00:15:18.021 "name": "BaseBdev3", 00:15:18.022 "uuid": "fd829c60-465b-45f8-a95f-19bd5141c8d5", 00:15:18.022 "is_configured": true, 
00:15:18.022 "data_offset": 0, 00:15:18.022 "data_size": 65536 00:15:18.022 } 00:15:18.022 ] 00:15:18.022 }' 00:15:18.022 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.022 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.589 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.590 [2024-11-08 16:56:47.932290] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.590 16:56:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.590 "name": "Existed_Raid", 00:15:18.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.590 "strip_size_kb": 64, 00:15:18.590 "state": "configuring", 00:15:18.590 "raid_level": "raid5f", 00:15:18.590 "superblock": false, 00:15:18.590 "num_base_bdevs": 3, 00:15:18.590 "num_base_bdevs_discovered": 1, 00:15:18.590 "num_base_bdevs_operational": 3, 00:15:18.590 "base_bdevs_list": [ 00:15:18.590 { 00:15:18.590 "name": "BaseBdev1", 00:15:18.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.590 "is_configured": false, 00:15:18.590 "data_offset": 0, 00:15:18.590 "data_size": 0 00:15:18.590 }, 00:15:18.590 { 00:15:18.590 "name": null, 00:15:18.590 "uuid": "14c06d8a-3745-44a3-b955-b37d10f16321", 00:15:18.590 "is_configured": false, 00:15:18.590 "data_offset": 0, 00:15:18.590 "data_size": 65536 00:15:18.590 }, 00:15:18.590 { 00:15:18.590 "name": "BaseBdev3", 00:15:18.590 "uuid": "fd829c60-465b-45f8-a95f-19bd5141c8d5", 00:15:18.590 "is_configured": true, 00:15:18.590 "data_offset": 0, 00:15:18.590 "data_size": 65536 00:15:18.590 } 00:15:18.590 ] 00:15:18.590 }' 00:15:18.590 16:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.590 16:56:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.159 [2024-11-08 16:56:48.458557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.159 BaseBdev1 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:19.159 16:56:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.159 [ 00:15:19.159 { 00:15:19.159 "name": "BaseBdev1", 00:15:19.159 "aliases": [ 00:15:19.159 "c1f70521-a938-44bd-afac-758b82573f33" 00:15:19.159 ], 00:15:19.159 "product_name": "Malloc disk", 00:15:19.159 "block_size": 512, 00:15:19.159 "num_blocks": 65536, 00:15:19.159 "uuid": "c1f70521-a938-44bd-afac-758b82573f33", 00:15:19.159 "assigned_rate_limits": { 00:15:19.159 "rw_ios_per_sec": 0, 00:15:19.159 "rw_mbytes_per_sec": 0, 00:15:19.159 "r_mbytes_per_sec": 0, 00:15:19.159 "w_mbytes_per_sec": 0 00:15:19.159 }, 00:15:19.159 "claimed": true, 00:15:19.159 "claim_type": "exclusive_write", 00:15:19.159 "zoned": false, 00:15:19.159 "supported_io_types": { 00:15:19.159 "read": true, 00:15:19.159 "write": true, 00:15:19.159 "unmap": true, 00:15:19.159 "flush": true, 00:15:19.159 "reset": true, 00:15:19.159 "nvme_admin": false, 00:15:19.159 "nvme_io": false, 00:15:19.159 "nvme_io_md": false, 00:15:19.159 "write_zeroes": true, 00:15:19.159 "zcopy": true, 00:15:19.159 "get_zone_info": false, 00:15:19.159 "zone_management": false, 00:15:19.159 "zone_append": false, 00:15:19.159 
"compare": false, 00:15:19.159 "compare_and_write": false, 00:15:19.159 "abort": true, 00:15:19.159 "seek_hole": false, 00:15:19.159 "seek_data": false, 00:15:19.159 "copy": true, 00:15:19.159 "nvme_iov_md": false 00:15:19.159 }, 00:15:19.159 "memory_domains": [ 00:15:19.159 { 00:15:19.159 "dma_device_id": "system", 00:15:19.159 "dma_device_type": 1 00:15:19.159 }, 00:15:19.159 { 00:15:19.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.159 "dma_device_type": 2 00:15:19.159 } 00:15:19.159 ], 00:15:19.159 "driver_specific": {} 00:15:19.159 } 00:15:19.159 ] 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.159 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.159 16:56:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.160 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.160 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.160 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.160 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.160 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.160 "name": "Existed_Raid", 00:15:19.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.160 "strip_size_kb": 64, 00:15:19.160 "state": "configuring", 00:15:19.160 "raid_level": "raid5f", 00:15:19.160 "superblock": false, 00:15:19.160 "num_base_bdevs": 3, 00:15:19.160 "num_base_bdevs_discovered": 2, 00:15:19.160 "num_base_bdevs_operational": 3, 00:15:19.160 "base_bdevs_list": [ 00:15:19.160 { 00:15:19.160 "name": "BaseBdev1", 00:15:19.160 "uuid": "c1f70521-a938-44bd-afac-758b82573f33", 00:15:19.160 "is_configured": true, 00:15:19.160 "data_offset": 0, 00:15:19.160 "data_size": 65536 00:15:19.160 }, 00:15:19.160 { 00:15:19.160 "name": null, 00:15:19.160 "uuid": "14c06d8a-3745-44a3-b955-b37d10f16321", 00:15:19.160 "is_configured": false, 00:15:19.160 "data_offset": 0, 00:15:19.160 "data_size": 65536 00:15:19.160 }, 00:15:19.160 { 00:15:19.160 "name": "BaseBdev3", 00:15:19.160 "uuid": "fd829c60-465b-45f8-a95f-19bd5141c8d5", 00:15:19.160 "is_configured": true, 00:15:19.160 "data_offset": 0, 00:15:19.160 "data_size": 65536 00:15:19.160 } 00:15:19.160 ] 00:15:19.160 }' 00:15:19.160 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.160 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.419 16:56:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.419 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.419 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.419 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:19.419 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.678 [2024-11-08 16:56:48.973793] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.678 16:56:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.678 16:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.678 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.678 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.678 "name": "Existed_Raid", 00:15:19.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.678 "strip_size_kb": 64, 00:15:19.678 "state": "configuring", 00:15:19.678 "raid_level": "raid5f", 00:15:19.678 "superblock": false, 00:15:19.678 "num_base_bdevs": 3, 00:15:19.678 "num_base_bdevs_discovered": 1, 00:15:19.678 "num_base_bdevs_operational": 3, 00:15:19.678 "base_bdevs_list": [ 00:15:19.678 { 00:15:19.678 "name": "BaseBdev1", 00:15:19.678 "uuid": "c1f70521-a938-44bd-afac-758b82573f33", 00:15:19.678 "is_configured": true, 00:15:19.678 "data_offset": 0, 00:15:19.678 "data_size": 65536 00:15:19.678 }, 00:15:19.678 { 00:15:19.678 "name": null, 00:15:19.678 "uuid": "14c06d8a-3745-44a3-b955-b37d10f16321", 00:15:19.678 "is_configured": false, 00:15:19.678 "data_offset": 0, 00:15:19.678 "data_size": 65536 00:15:19.678 }, 00:15:19.678 { 00:15:19.678 "name": null, 
00:15:19.678 "uuid": "fd829c60-465b-45f8-a95f-19bd5141c8d5", 00:15:19.678 "is_configured": false, 00:15:19.678 "data_offset": 0, 00:15:19.678 "data_size": 65536 00:15:19.678 } 00:15:19.678 ] 00:15:19.678 }' 00:15:19.678 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.678 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.938 [2024-11-08 16:56:49.449031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.938 16:56:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.938 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.198 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.198 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.198 "name": "Existed_Raid", 00:15:20.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.198 "strip_size_kb": 64, 00:15:20.198 "state": "configuring", 00:15:20.198 "raid_level": "raid5f", 00:15:20.198 "superblock": false, 00:15:20.198 "num_base_bdevs": 3, 00:15:20.198 "num_base_bdevs_discovered": 2, 00:15:20.198 "num_base_bdevs_operational": 3, 00:15:20.198 "base_bdevs_list": [ 00:15:20.198 { 
00:15:20.198 "name": "BaseBdev1", 00:15:20.198 "uuid": "c1f70521-a938-44bd-afac-758b82573f33", 00:15:20.198 "is_configured": true, 00:15:20.198 "data_offset": 0, 00:15:20.198 "data_size": 65536 00:15:20.198 }, 00:15:20.198 { 00:15:20.198 "name": null, 00:15:20.198 "uuid": "14c06d8a-3745-44a3-b955-b37d10f16321", 00:15:20.198 "is_configured": false, 00:15:20.198 "data_offset": 0, 00:15:20.198 "data_size": 65536 00:15:20.198 }, 00:15:20.198 { 00:15:20.198 "name": "BaseBdev3", 00:15:20.198 "uuid": "fd829c60-465b-45f8-a95f-19bd5141c8d5", 00:15:20.198 "is_configured": true, 00:15:20.198 "data_offset": 0, 00:15:20.198 "data_size": 65536 00:15:20.198 } 00:15:20.198 ] 00:15:20.198 }' 00:15:20.198 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.198 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.458 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:20.458 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.458 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.458 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.459 [2024-11-08 16:56:49.956240] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.459 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.718 16:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.718 16:56:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.718 "name": "Existed_Raid", 00:15:20.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.718 "strip_size_kb": 64, 00:15:20.718 "state": "configuring", 00:15:20.718 "raid_level": "raid5f", 00:15:20.718 "superblock": false, 00:15:20.718 "num_base_bdevs": 3, 00:15:20.718 "num_base_bdevs_discovered": 1, 00:15:20.718 "num_base_bdevs_operational": 3, 00:15:20.718 "base_bdevs_list": [ 00:15:20.718 { 00:15:20.718 "name": null, 00:15:20.718 "uuid": "c1f70521-a938-44bd-afac-758b82573f33", 00:15:20.718 "is_configured": false, 00:15:20.718 "data_offset": 0, 00:15:20.718 "data_size": 65536 00:15:20.718 }, 00:15:20.718 { 00:15:20.718 "name": null, 00:15:20.718 "uuid": "14c06d8a-3745-44a3-b955-b37d10f16321", 00:15:20.718 "is_configured": false, 00:15:20.718 "data_offset": 0, 00:15:20.718 "data_size": 65536 00:15:20.718 }, 00:15:20.718 { 00:15:20.718 "name": "BaseBdev3", 00:15:20.718 "uuid": "fd829c60-465b-45f8-a95f-19bd5141c8d5", 00:15:20.718 "is_configured": true, 00:15:20.718 "data_offset": 0, 00:15:20.718 "data_size": 65536 00:15:20.718 } 00:15:20.718 ] 00:15:20.718 }' 00:15:20.718 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.718 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.977 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.977 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:20.977 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.977 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.977 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.237 [2024-11-08 16:56:50.514104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.237 16:56:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.237 "name": "Existed_Raid", 00:15:21.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.237 "strip_size_kb": 64, 00:15:21.237 "state": "configuring", 00:15:21.237 "raid_level": "raid5f", 00:15:21.237 "superblock": false, 00:15:21.237 "num_base_bdevs": 3, 00:15:21.237 "num_base_bdevs_discovered": 2, 00:15:21.237 "num_base_bdevs_operational": 3, 00:15:21.237 "base_bdevs_list": [ 00:15:21.237 { 00:15:21.237 "name": null, 00:15:21.237 "uuid": "c1f70521-a938-44bd-afac-758b82573f33", 00:15:21.237 "is_configured": false, 00:15:21.237 "data_offset": 0, 00:15:21.237 "data_size": 65536 00:15:21.237 }, 00:15:21.237 { 00:15:21.237 "name": "BaseBdev2", 00:15:21.237 "uuid": "14c06d8a-3745-44a3-b955-b37d10f16321", 00:15:21.237 "is_configured": true, 00:15:21.237 "data_offset": 0, 00:15:21.237 "data_size": 65536 00:15:21.237 }, 00:15:21.237 { 00:15:21.237 "name": "BaseBdev3", 00:15:21.237 "uuid": "fd829c60-465b-45f8-a95f-19bd5141c8d5", 00:15:21.237 "is_configured": true, 00:15:21.237 "data_offset": 0, 00:15:21.237 "data_size": 65536 00:15:21.237 } 00:15:21.237 ] 00:15:21.237 }' 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.237 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.497 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.497 16:56:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.497 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.497 16:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:21.497 16:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c1f70521-a938-44bd-afac-758b82573f33 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.757 [2024-11-08 16:56:51.088493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:21.757 [2024-11-08 16:56:51.088645] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:21.757 [2024-11-08 16:56:51.088696] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:21.757 [2024-11-08 16:56:51.089024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006080 00:15:21.757 [2024-11-08 16:56:51.089528] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:21.757 [2024-11-08 16:56:51.089579] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:15:21.757 [2024-11-08 16:56:51.089833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.757 NewBaseBdev 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.757 16:56:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.757 [ 00:15:21.757 { 00:15:21.757 "name": "NewBaseBdev", 00:15:21.757 "aliases": [ 00:15:21.757 "c1f70521-a938-44bd-afac-758b82573f33" 00:15:21.757 ], 00:15:21.757 "product_name": "Malloc disk", 00:15:21.757 "block_size": 512, 00:15:21.757 "num_blocks": 65536, 00:15:21.757 "uuid": "c1f70521-a938-44bd-afac-758b82573f33", 00:15:21.757 "assigned_rate_limits": { 00:15:21.757 "rw_ios_per_sec": 0, 00:15:21.757 "rw_mbytes_per_sec": 0, 00:15:21.757 "r_mbytes_per_sec": 0, 00:15:21.757 "w_mbytes_per_sec": 0 00:15:21.757 }, 00:15:21.757 "claimed": true, 00:15:21.757 "claim_type": "exclusive_write", 00:15:21.757 "zoned": false, 00:15:21.757 "supported_io_types": { 00:15:21.757 "read": true, 00:15:21.757 "write": true, 00:15:21.757 "unmap": true, 00:15:21.757 "flush": true, 00:15:21.757 "reset": true, 00:15:21.757 "nvme_admin": false, 00:15:21.757 "nvme_io": false, 00:15:21.757 "nvme_io_md": false, 00:15:21.757 "write_zeroes": true, 00:15:21.757 "zcopy": true, 00:15:21.757 "get_zone_info": false, 00:15:21.757 "zone_management": false, 00:15:21.757 "zone_append": false, 00:15:21.757 "compare": false, 00:15:21.757 "compare_and_write": false, 00:15:21.757 "abort": true, 00:15:21.757 "seek_hole": false, 00:15:21.757 "seek_data": false, 00:15:21.757 "copy": true, 00:15:21.757 "nvme_iov_md": false 00:15:21.757 }, 00:15:21.757 "memory_domains": [ 00:15:21.757 { 00:15:21.757 "dma_device_id": "system", 00:15:21.757 "dma_device_type": 1 00:15:21.757 }, 00:15:21.757 { 00:15:21.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.757 "dma_device_type": 2 00:15:21.757 } 00:15:21.757 ], 00:15:21.757 "driver_specific": {} 00:15:21.757 } 00:15:21.757 ] 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:21.757 16:56:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.757 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.758 "name": "Existed_Raid", 00:15:21.758 "uuid": "08b62d11-40aa-473c-ba3d-1f52152cf5bb", 00:15:21.758 "strip_size_kb": 64, 00:15:21.758 "state": "online", 
00:15:21.758 "raid_level": "raid5f", 00:15:21.758 "superblock": false, 00:15:21.758 "num_base_bdevs": 3, 00:15:21.758 "num_base_bdevs_discovered": 3, 00:15:21.758 "num_base_bdevs_operational": 3, 00:15:21.758 "base_bdevs_list": [ 00:15:21.758 { 00:15:21.758 "name": "NewBaseBdev", 00:15:21.758 "uuid": "c1f70521-a938-44bd-afac-758b82573f33", 00:15:21.758 "is_configured": true, 00:15:21.758 "data_offset": 0, 00:15:21.758 "data_size": 65536 00:15:21.758 }, 00:15:21.758 { 00:15:21.758 "name": "BaseBdev2", 00:15:21.758 "uuid": "14c06d8a-3745-44a3-b955-b37d10f16321", 00:15:21.758 "is_configured": true, 00:15:21.758 "data_offset": 0, 00:15:21.758 "data_size": 65536 00:15:21.758 }, 00:15:21.758 { 00:15:21.758 "name": "BaseBdev3", 00:15:21.758 "uuid": "fd829c60-465b-45f8-a95f-19bd5141c8d5", 00:15:21.758 "is_configured": true, 00:15:21.758 "data_offset": 0, 00:15:21.758 "data_size": 65536 00:15:21.758 } 00:15:21.758 ] 00:15:21.758 }' 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.758 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.326 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:22.326 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:22.326 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:22.326 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:22.327 16:56:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.327 [2024-11-08 16:56:51.568024] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:22.327 "name": "Existed_Raid", 00:15:22.327 "aliases": [ 00:15:22.327 "08b62d11-40aa-473c-ba3d-1f52152cf5bb" 00:15:22.327 ], 00:15:22.327 "product_name": "Raid Volume", 00:15:22.327 "block_size": 512, 00:15:22.327 "num_blocks": 131072, 00:15:22.327 "uuid": "08b62d11-40aa-473c-ba3d-1f52152cf5bb", 00:15:22.327 "assigned_rate_limits": { 00:15:22.327 "rw_ios_per_sec": 0, 00:15:22.327 "rw_mbytes_per_sec": 0, 00:15:22.327 "r_mbytes_per_sec": 0, 00:15:22.327 "w_mbytes_per_sec": 0 00:15:22.327 }, 00:15:22.327 "claimed": false, 00:15:22.327 "zoned": false, 00:15:22.327 "supported_io_types": { 00:15:22.327 "read": true, 00:15:22.327 "write": true, 00:15:22.327 "unmap": false, 00:15:22.327 "flush": false, 00:15:22.327 "reset": true, 00:15:22.327 "nvme_admin": false, 00:15:22.327 "nvme_io": false, 00:15:22.327 "nvme_io_md": false, 00:15:22.327 "write_zeroes": true, 00:15:22.327 "zcopy": false, 00:15:22.327 "get_zone_info": false, 00:15:22.327 "zone_management": false, 00:15:22.327 "zone_append": false, 00:15:22.327 "compare": false, 00:15:22.327 "compare_and_write": false, 00:15:22.327 "abort": false, 00:15:22.327 "seek_hole": false, 00:15:22.327 "seek_data": false, 00:15:22.327 "copy": false, 00:15:22.327 "nvme_iov_md": false 00:15:22.327 }, 00:15:22.327 "driver_specific": { 00:15:22.327 "raid": { 00:15:22.327 "uuid": 
"08b62d11-40aa-473c-ba3d-1f52152cf5bb", 00:15:22.327 "strip_size_kb": 64, 00:15:22.327 "state": "online", 00:15:22.327 "raid_level": "raid5f", 00:15:22.327 "superblock": false, 00:15:22.327 "num_base_bdevs": 3, 00:15:22.327 "num_base_bdevs_discovered": 3, 00:15:22.327 "num_base_bdevs_operational": 3, 00:15:22.327 "base_bdevs_list": [ 00:15:22.327 { 00:15:22.327 "name": "NewBaseBdev", 00:15:22.327 "uuid": "c1f70521-a938-44bd-afac-758b82573f33", 00:15:22.327 "is_configured": true, 00:15:22.327 "data_offset": 0, 00:15:22.327 "data_size": 65536 00:15:22.327 }, 00:15:22.327 { 00:15:22.327 "name": "BaseBdev2", 00:15:22.327 "uuid": "14c06d8a-3745-44a3-b955-b37d10f16321", 00:15:22.327 "is_configured": true, 00:15:22.327 "data_offset": 0, 00:15:22.327 "data_size": 65536 00:15:22.327 }, 00:15:22.327 { 00:15:22.327 "name": "BaseBdev3", 00:15:22.327 "uuid": "fd829c60-465b-45f8-a95f-19bd5141c8d5", 00:15:22.327 "is_configured": true, 00:15:22.327 "data_offset": 0, 00:15:22.327 "data_size": 65536 00:15:22.327 } 00:15:22.327 ] 00:15:22.327 } 00:15:22.327 } 00:15:22.327 }' 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:22.327 BaseBdev2 00:15:22.327 BaseBdev3' 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.327 16:56:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.327 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.588 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.588 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.588 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.588 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.588 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.588 [2024-11-08 16:56:51.871368] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.588 [2024-11-08 16:56:51.871459] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.588 [2024-11-08 16:56:51.871581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.588 [2024-11-08 16:56:51.871909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.588 [2024-11-08 16:56:51.871977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:15:22.588 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.588 16:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90540 00:15:22.588 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90540 ']' 00:15:22.588 16:56:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 90540 00:15:22.588 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:22.589 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:22.589 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90540 00:15:22.589 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:22.589 killing process with pid 90540 00:15:22.589 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:22.589 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90540' 00:15:22.589 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90540 00:15:22.589 [2024-11-08 16:56:51.920906] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.589 16:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90540 00:15:22.589 [2024-11-08 16:56:51.951512] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:22.848 00:15:22.848 real 0m9.201s 00:15:22.848 user 0m15.716s 00:15:22.848 sys 0m1.933s 00:15:22.848 ************************************ 00:15:22.848 END TEST raid5f_state_function_test 00:15:22.848 ************************************ 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.848 16:56:52 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:22.848 16:56:52 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:22.848 16:56:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:22.848 16:56:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.848 ************************************ 00:15:22.848 START TEST raid5f_state_function_test_sb 00:15:22.848 ************************************ 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:22.848 16:56:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91145 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:22.848 Process raid pid: 91145 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91145' 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 91145 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91145 ']' 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.848 16:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.848 [2024-11-08 16:56:52.367717] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:22.848 [2024-11-08 16:56:52.367854] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.108 [2024-11-08 16:56:52.510721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.108 [2024-11-08 16:56:52.560320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.108 [2024-11-08 16:56:52.603378] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.108 [2024-11-08 16:56:52.603421] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.047 [2024-11-08 16:56:53.241351] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.047 [2024-11-08 16:56:53.241504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.047 [2024-11-08 16:56:53.241527] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.047 [2024-11-08 16:56:53.241541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.047 [2024-11-08 16:56:53.241548] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:24.047 [2024-11-08 16:56:53.241562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.047 16:56:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.047 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.048 "name": "Existed_Raid", 00:15:24.048 "uuid": "1c36fde8-60bc-490e-8049-86d306e13cdb", 00:15:24.048 "strip_size_kb": 64, 00:15:24.048 "state": "configuring", 00:15:24.048 "raid_level": "raid5f", 00:15:24.048 "superblock": true, 00:15:24.048 "num_base_bdevs": 3, 00:15:24.048 "num_base_bdevs_discovered": 0, 00:15:24.048 "num_base_bdevs_operational": 3, 00:15:24.048 "base_bdevs_list": [ 00:15:24.048 { 00:15:24.048 "name": "BaseBdev1", 00:15:24.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.048 "is_configured": false, 00:15:24.048 "data_offset": 0, 00:15:24.048 "data_size": 0 00:15:24.048 }, 00:15:24.048 { 00:15:24.048 "name": "BaseBdev2", 00:15:24.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.048 "is_configured": false, 00:15:24.048 "data_offset": 0, 00:15:24.048 "data_size": 0 00:15:24.048 }, 00:15:24.048 { 00:15:24.048 "name": "BaseBdev3", 00:15:24.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.048 "is_configured": false, 00:15:24.048 "data_offset": 0, 00:15:24.048 "data_size": 0 00:15:24.048 } 00:15:24.048 ] 00:15:24.048 }' 00:15:24.048 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.048 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.307 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.307 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.307 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.307 [2024-11-08 16:56:53.668541] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.308 
[2024-11-08 16:56:53.668655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.308 [2024-11-08 16:56:53.680541] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.308 [2024-11-08 16:56:53.680584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.308 [2024-11-08 16:56:53.680593] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.308 [2024-11-08 16:56:53.680602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.308 [2024-11-08 16:56:53.680608] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.308 [2024-11-08 16:56:53.680616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.308 [2024-11-08 16:56:53.701488] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.308 BaseBdev1 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.308 [ 00:15:24.308 { 00:15:24.308 "name": "BaseBdev1", 00:15:24.308 "aliases": [ 00:15:24.308 "bd7f0982-5fed-46f4-9853-cf601391ea9d" 00:15:24.308 ], 00:15:24.308 "product_name": "Malloc disk", 00:15:24.308 "block_size": 512, 00:15:24.308 
"num_blocks": 65536, 00:15:24.308 "uuid": "bd7f0982-5fed-46f4-9853-cf601391ea9d", 00:15:24.308 "assigned_rate_limits": { 00:15:24.308 "rw_ios_per_sec": 0, 00:15:24.308 "rw_mbytes_per_sec": 0, 00:15:24.308 "r_mbytes_per_sec": 0, 00:15:24.308 "w_mbytes_per_sec": 0 00:15:24.308 }, 00:15:24.308 "claimed": true, 00:15:24.308 "claim_type": "exclusive_write", 00:15:24.308 "zoned": false, 00:15:24.308 "supported_io_types": { 00:15:24.308 "read": true, 00:15:24.308 "write": true, 00:15:24.308 "unmap": true, 00:15:24.308 "flush": true, 00:15:24.308 "reset": true, 00:15:24.308 "nvme_admin": false, 00:15:24.308 "nvme_io": false, 00:15:24.308 "nvme_io_md": false, 00:15:24.308 "write_zeroes": true, 00:15:24.308 "zcopy": true, 00:15:24.308 "get_zone_info": false, 00:15:24.308 "zone_management": false, 00:15:24.308 "zone_append": false, 00:15:24.308 "compare": false, 00:15:24.308 "compare_and_write": false, 00:15:24.308 "abort": true, 00:15:24.308 "seek_hole": false, 00:15:24.308 "seek_data": false, 00:15:24.308 "copy": true, 00:15:24.308 "nvme_iov_md": false 00:15:24.308 }, 00:15:24.308 "memory_domains": [ 00:15:24.308 { 00:15:24.308 "dma_device_id": "system", 00:15:24.308 "dma_device_type": 1 00:15:24.308 }, 00:15:24.308 { 00:15:24.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.308 "dma_device_type": 2 00:15:24.308 } 00:15:24.308 ], 00:15:24.308 "driver_specific": {} 00:15:24.308 } 00:15:24.308 ] 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.308 "name": "Existed_Raid", 00:15:24.308 "uuid": "d45e9e5c-23ce-432b-81ce-ff76ffb8dad1", 00:15:24.308 "strip_size_kb": 64, 00:15:24.308 "state": "configuring", 00:15:24.308 "raid_level": "raid5f", 00:15:24.308 "superblock": true, 00:15:24.308 "num_base_bdevs": 3, 00:15:24.308 "num_base_bdevs_discovered": 1, 00:15:24.308 "num_base_bdevs_operational": 3, 00:15:24.308 "base_bdevs_list": [ 00:15:24.308 { 00:15:24.308 
"name": "BaseBdev1", 00:15:24.308 "uuid": "bd7f0982-5fed-46f4-9853-cf601391ea9d", 00:15:24.308 "is_configured": true, 00:15:24.308 "data_offset": 2048, 00:15:24.308 "data_size": 63488 00:15:24.308 }, 00:15:24.308 { 00:15:24.308 "name": "BaseBdev2", 00:15:24.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.308 "is_configured": false, 00:15:24.308 "data_offset": 0, 00:15:24.308 "data_size": 0 00:15:24.308 }, 00:15:24.308 { 00:15:24.308 "name": "BaseBdev3", 00:15:24.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.308 "is_configured": false, 00:15:24.308 "data_offset": 0, 00:15:24.308 "data_size": 0 00:15:24.308 } 00:15:24.308 ] 00:15:24.308 }' 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.308 16:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.879 [2024-11-08 16:56:54.192770] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.879 [2024-11-08 16:56:54.192836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:24.879 [2024-11-08 16:56:54.204780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.879 [2024-11-08 16:56:54.206715] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.879 [2024-11-08 16:56:54.206802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.879 [2024-11-08 16:56:54.206816] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.879 [2024-11-08 16:56:54.206827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.879 "name": "Existed_Raid", 00:15:24.879 "uuid": "5251d5f5-6ed6-481c-a5ed-3f0fa3b3de4e", 00:15:24.879 "strip_size_kb": 64, 00:15:24.879 "state": "configuring", 00:15:24.879 "raid_level": "raid5f", 00:15:24.879 "superblock": true, 00:15:24.879 "num_base_bdevs": 3, 00:15:24.879 "num_base_bdevs_discovered": 1, 00:15:24.879 "num_base_bdevs_operational": 3, 00:15:24.879 "base_bdevs_list": [ 00:15:24.879 { 00:15:24.879 "name": "BaseBdev1", 00:15:24.879 "uuid": "bd7f0982-5fed-46f4-9853-cf601391ea9d", 00:15:24.879 "is_configured": true, 00:15:24.879 "data_offset": 2048, 00:15:24.879 "data_size": 63488 00:15:24.879 }, 00:15:24.879 { 00:15:24.879 "name": "BaseBdev2", 00:15:24.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.879 "is_configured": false, 00:15:24.879 "data_offset": 0, 00:15:24.879 "data_size": 0 00:15:24.879 }, 00:15:24.879 { 00:15:24.879 "name": "BaseBdev3", 00:15:24.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.879 "is_configured": false, 00:15:24.879 "data_offset": 0, 00:15:24.879 "data_size": 
0 00:15:24.879 } 00:15:24.879 ] 00:15:24.879 }' 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.879 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.447 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:25.447 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.447 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.447 [2024-11-08 16:56:54.703203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.447 BaseBdev2 00:15:25.447 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.447 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:25.447 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:25.447 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:25.447 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:25.447 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:25.447 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.448 [ 00:15:25.448 { 00:15:25.448 "name": "BaseBdev2", 00:15:25.448 "aliases": [ 00:15:25.448 "e53c8800-f560-4806-9cb9-3838e165e945" 00:15:25.448 ], 00:15:25.448 "product_name": "Malloc disk", 00:15:25.448 "block_size": 512, 00:15:25.448 "num_blocks": 65536, 00:15:25.448 "uuid": "e53c8800-f560-4806-9cb9-3838e165e945", 00:15:25.448 "assigned_rate_limits": { 00:15:25.448 "rw_ios_per_sec": 0, 00:15:25.448 "rw_mbytes_per_sec": 0, 00:15:25.448 "r_mbytes_per_sec": 0, 00:15:25.448 "w_mbytes_per_sec": 0 00:15:25.448 }, 00:15:25.448 "claimed": true, 00:15:25.448 "claim_type": "exclusive_write", 00:15:25.448 "zoned": false, 00:15:25.448 "supported_io_types": { 00:15:25.448 "read": true, 00:15:25.448 "write": true, 00:15:25.448 "unmap": true, 00:15:25.448 "flush": true, 00:15:25.448 "reset": true, 00:15:25.448 "nvme_admin": false, 00:15:25.448 "nvme_io": false, 00:15:25.448 "nvme_io_md": false, 00:15:25.448 "write_zeroes": true, 00:15:25.448 "zcopy": true, 00:15:25.448 "get_zone_info": false, 00:15:25.448 "zone_management": false, 00:15:25.448 "zone_append": false, 00:15:25.448 "compare": false, 00:15:25.448 "compare_and_write": false, 00:15:25.448 "abort": true, 00:15:25.448 "seek_hole": false, 00:15:25.448 "seek_data": false, 00:15:25.448 "copy": true, 00:15:25.448 "nvme_iov_md": false 00:15:25.448 }, 00:15:25.448 "memory_domains": [ 00:15:25.448 { 00:15:25.448 "dma_device_id": "system", 00:15:25.448 "dma_device_type": 1 00:15:25.448 }, 00:15:25.448 { 00:15:25.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.448 "dma_device_type": 2 00:15:25.448 } 
00:15:25.448 ], 00:15:25.448 "driver_specific": {} 00:15:25.448 } 00:15:25.448 ] 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.448 "name": "Existed_Raid", 00:15:25.448 "uuid": "5251d5f5-6ed6-481c-a5ed-3f0fa3b3de4e", 00:15:25.448 "strip_size_kb": 64, 00:15:25.448 "state": "configuring", 00:15:25.448 "raid_level": "raid5f", 00:15:25.448 "superblock": true, 00:15:25.448 "num_base_bdevs": 3, 00:15:25.448 "num_base_bdevs_discovered": 2, 00:15:25.448 "num_base_bdevs_operational": 3, 00:15:25.448 "base_bdevs_list": [ 00:15:25.448 { 00:15:25.448 "name": "BaseBdev1", 00:15:25.448 "uuid": "bd7f0982-5fed-46f4-9853-cf601391ea9d", 00:15:25.448 "is_configured": true, 00:15:25.448 "data_offset": 2048, 00:15:25.448 "data_size": 63488 00:15:25.448 }, 00:15:25.448 { 00:15:25.448 "name": "BaseBdev2", 00:15:25.448 "uuid": "e53c8800-f560-4806-9cb9-3838e165e945", 00:15:25.448 "is_configured": true, 00:15:25.448 "data_offset": 2048, 00:15:25.448 "data_size": 63488 00:15:25.448 }, 00:15:25.448 { 00:15:25.448 "name": "BaseBdev3", 00:15:25.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.448 "is_configured": false, 00:15:25.448 "data_offset": 0, 00:15:25.448 "data_size": 0 00:15:25.448 } 00:15:25.448 ] 00:15:25.448 }' 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.448 16:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.707 [2024-11-08 16:56:55.225571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.707 [2024-11-08 16:56:55.225897] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:25.707 [2024-11-08 16:56:55.225920] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:25.707 [2024-11-08 16:56:55.226237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:25.707 BaseBdev3 00:15:25.707 [2024-11-08 16:56:55.226665] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:25.707 [2024-11-08 16:56:55.226682] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:25.707 [2024-11-08 16:56:55.226822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.707 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.967 [ 00:15:25.967 { 00:15:25.967 "name": "BaseBdev3", 00:15:25.967 "aliases": [ 00:15:25.967 "1e1164f3-1a3a-4934-93ed-5594c6234c5a" 00:15:25.967 ], 00:15:25.967 "product_name": "Malloc disk", 00:15:25.967 "block_size": 512, 00:15:25.967 "num_blocks": 65536, 00:15:25.967 "uuid": "1e1164f3-1a3a-4934-93ed-5594c6234c5a", 00:15:25.967 "assigned_rate_limits": { 00:15:25.967 "rw_ios_per_sec": 0, 00:15:25.967 "rw_mbytes_per_sec": 0, 00:15:25.967 "r_mbytes_per_sec": 0, 00:15:25.967 "w_mbytes_per_sec": 0 00:15:25.967 }, 00:15:25.967 "claimed": true, 00:15:25.967 "claim_type": "exclusive_write", 00:15:25.967 "zoned": false, 00:15:25.967 "supported_io_types": { 00:15:25.967 "read": true, 00:15:25.967 "write": true, 00:15:25.967 "unmap": true, 00:15:25.967 "flush": true, 00:15:25.967 "reset": true, 00:15:25.967 "nvme_admin": false, 00:15:25.967 "nvme_io": false, 00:15:25.967 "nvme_io_md": false, 00:15:25.967 "write_zeroes": true, 00:15:25.967 "zcopy": true, 00:15:25.967 "get_zone_info": false, 00:15:25.967 "zone_management": false, 00:15:25.967 "zone_append": false, 00:15:25.967 "compare": false, 00:15:25.967 "compare_and_write": false, 00:15:25.967 "abort": true, 00:15:25.967 "seek_hole": false, 00:15:25.967 "seek_data": false, 00:15:25.967 "copy": true, 00:15:25.967 "nvme_iov_md": 
false 00:15:25.967 }, 00:15:25.967 "memory_domains": [ 00:15:25.967 { 00:15:25.967 "dma_device_id": "system", 00:15:25.967 "dma_device_type": 1 00:15:25.967 }, 00:15:25.967 { 00:15:25.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.967 "dma_device_type": 2 00:15:25.967 } 00:15:25.967 ], 00:15:25.967 "driver_specific": {} 00:15:25.967 } 00:15:25.967 ] 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.967 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.967 "name": "Existed_Raid", 00:15:25.967 "uuid": "5251d5f5-6ed6-481c-a5ed-3f0fa3b3de4e", 00:15:25.967 "strip_size_kb": 64, 00:15:25.967 "state": "online", 00:15:25.967 "raid_level": "raid5f", 00:15:25.967 "superblock": true, 00:15:25.967 "num_base_bdevs": 3, 00:15:25.967 "num_base_bdevs_discovered": 3, 00:15:25.967 "num_base_bdevs_operational": 3, 00:15:25.967 "base_bdevs_list": [ 00:15:25.967 { 00:15:25.968 "name": "BaseBdev1", 00:15:25.968 "uuid": "bd7f0982-5fed-46f4-9853-cf601391ea9d", 00:15:25.968 "is_configured": true, 00:15:25.968 "data_offset": 2048, 00:15:25.968 "data_size": 63488 00:15:25.968 }, 00:15:25.968 { 00:15:25.968 "name": "BaseBdev2", 00:15:25.968 "uuid": "e53c8800-f560-4806-9cb9-3838e165e945", 00:15:25.968 "is_configured": true, 00:15:25.968 "data_offset": 2048, 00:15:25.968 "data_size": 63488 00:15:25.968 }, 00:15:25.968 { 00:15:25.968 "name": "BaseBdev3", 00:15:25.968 "uuid": "1e1164f3-1a3a-4934-93ed-5594c6234c5a", 00:15:25.968 "is_configured": true, 00:15:25.968 "data_offset": 2048, 00:15:25.968 "data_size": 63488 00:15:25.968 } 00:15:25.968 ] 00:15:25.968 }' 00:15:25.968 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.968 16:56:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.227 [2024-11-08 16:56:55.701085] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.227 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.227 "name": "Existed_Raid", 00:15:26.227 "aliases": [ 00:15:26.227 "5251d5f5-6ed6-481c-a5ed-3f0fa3b3de4e" 00:15:26.227 ], 00:15:26.227 "product_name": "Raid Volume", 00:15:26.227 "block_size": 512, 00:15:26.227 "num_blocks": 126976, 00:15:26.227 "uuid": "5251d5f5-6ed6-481c-a5ed-3f0fa3b3de4e", 00:15:26.227 "assigned_rate_limits": { 00:15:26.227 "rw_ios_per_sec": 0, 00:15:26.227 "rw_mbytes_per_sec": 0, 00:15:26.227 "r_mbytes_per_sec": 
0, 00:15:26.227 "w_mbytes_per_sec": 0 00:15:26.227 }, 00:15:26.227 "claimed": false, 00:15:26.227 "zoned": false, 00:15:26.227 "supported_io_types": { 00:15:26.227 "read": true, 00:15:26.227 "write": true, 00:15:26.227 "unmap": false, 00:15:26.227 "flush": false, 00:15:26.227 "reset": true, 00:15:26.227 "nvme_admin": false, 00:15:26.227 "nvme_io": false, 00:15:26.227 "nvme_io_md": false, 00:15:26.227 "write_zeroes": true, 00:15:26.227 "zcopy": false, 00:15:26.227 "get_zone_info": false, 00:15:26.227 "zone_management": false, 00:15:26.227 "zone_append": false, 00:15:26.227 "compare": false, 00:15:26.227 "compare_and_write": false, 00:15:26.227 "abort": false, 00:15:26.227 "seek_hole": false, 00:15:26.227 "seek_data": false, 00:15:26.227 "copy": false, 00:15:26.227 "nvme_iov_md": false 00:15:26.227 }, 00:15:26.227 "driver_specific": { 00:15:26.227 "raid": { 00:15:26.227 "uuid": "5251d5f5-6ed6-481c-a5ed-3f0fa3b3de4e", 00:15:26.227 "strip_size_kb": 64, 00:15:26.227 "state": "online", 00:15:26.227 "raid_level": "raid5f", 00:15:26.227 "superblock": true, 00:15:26.227 "num_base_bdevs": 3, 00:15:26.227 "num_base_bdevs_discovered": 3, 00:15:26.227 "num_base_bdevs_operational": 3, 00:15:26.227 "base_bdevs_list": [ 00:15:26.227 { 00:15:26.227 "name": "BaseBdev1", 00:15:26.227 "uuid": "bd7f0982-5fed-46f4-9853-cf601391ea9d", 00:15:26.227 "is_configured": true, 00:15:26.227 "data_offset": 2048, 00:15:26.227 "data_size": 63488 00:15:26.227 }, 00:15:26.227 { 00:15:26.227 "name": "BaseBdev2", 00:15:26.227 "uuid": "e53c8800-f560-4806-9cb9-3838e165e945", 00:15:26.227 "is_configured": true, 00:15:26.227 "data_offset": 2048, 00:15:26.227 "data_size": 63488 00:15:26.227 }, 00:15:26.227 { 00:15:26.227 "name": "BaseBdev3", 00:15:26.227 "uuid": "1e1164f3-1a3a-4934-93ed-5594c6234c5a", 00:15:26.227 "is_configured": true, 00:15:26.227 "data_offset": 2048, 00:15:26.227 "data_size": 63488 00:15:26.227 } 00:15:26.227 ] 00:15:26.227 } 00:15:26.227 } 00:15:26.227 }' 00:15:26.227 16:56:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:26.486 BaseBdev2 00:15:26.486 BaseBdev3' 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.486 [2024-11-08 16:56:55.964480] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.486 16:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.486 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.746 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.746 "name": "Existed_Raid", 00:15:26.746 "uuid": "5251d5f5-6ed6-481c-a5ed-3f0fa3b3de4e", 00:15:26.746 "strip_size_kb": 64, 00:15:26.746 "state": "online", 00:15:26.746 "raid_level": "raid5f", 00:15:26.746 "superblock": true, 00:15:26.746 "num_base_bdevs": 3, 00:15:26.746 "num_base_bdevs_discovered": 2, 00:15:26.746 "num_base_bdevs_operational": 2, 00:15:26.746 "base_bdevs_list": [ 00:15:26.746 { 00:15:26.746 "name": null, 00:15:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.746 "is_configured": false, 00:15:26.746 "data_offset": 0, 00:15:26.746 "data_size": 63488 00:15:26.746 }, 00:15:26.746 { 00:15:26.746 "name": "BaseBdev2", 00:15:26.746 "uuid": "e53c8800-f560-4806-9cb9-3838e165e945", 00:15:26.746 "is_configured": true, 00:15:26.746 "data_offset": 2048, 00:15:26.746 "data_size": 63488 00:15:26.746 }, 00:15:26.746 { 00:15:26.746 "name": "BaseBdev3", 00:15:26.746 "uuid": "1e1164f3-1a3a-4934-93ed-5594c6234c5a", 00:15:26.746 "is_configured": true, 00:15:26.746 "data_offset": 2048, 00:15:26.746 "data_size": 63488 00:15:26.746 } 00:15:26.746 ] 00:15:26.746 }' 00:15:26.746 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.746 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.005 [2024-11-08 16:56:56.483143] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.005 [2024-11-08 16:56:56.483303] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.005 [2024-11-08 16:56:56.494562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.005 16:56:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.005 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.266 [2024-11-08 16:56:56.554525] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:27.266 [2024-11-08 16:56:56.554640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.266 16:56:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.266 BaseBdev2 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.266 [ 00:15:27.266 { 00:15:27.266 "name": "BaseBdev2", 00:15:27.266 "aliases": [ 00:15:27.266 "48b21a08-9045-4df8-b515-1d9919e1ca8c" 00:15:27.266 ], 00:15:27.266 "product_name": "Malloc disk", 00:15:27.266 "block_size": 512, 00:15:27.266 "num_blocks": 65536, 00:15:27.266 "uuid": "48b21a08-9045-4df8-b515-1d9919e1ca8c", 00:15:27.266 "assigned_rate_limits": { 00:15:27.266 "rw_ios_per_sec": 0, 00:15:27.266 "rw_mbytes_per_sec": 0, 00:15:27.266 "r_mbytes_per_sec": 0, 00:15:27.266 "w_mbytes_per_sec": 0 00:15:27.266 }, 00:15:27.266 "claimed": false, 00:15:27.266 "zoned": false, 00:15:27.266 "supported_io_types": { 00:15:27.266 "read": true, 00:15:27.266 "write": true, 00:15:27.266 "unmap": true, 00:15:27.266 "flush": true, 00:15:27.266 "reset": true, 00:15:27.266 "nvme_admin": false, 00:15:27.266 "nvme_io": false, 00:15:27.266 "nvme_io_md": false, 00:15:27.266 "write_zeroes": true, 00:15:27.266 "zcopy": true, 00:15:27.266 "get_zone_info": false, 00:15:27.266 "zone_management": false, 00:15:27.266 "zone_append": false, 00:15:27.266 "compare": false, 00:15:27.266 "compare_and_write": false, 
00:15:27.266 "abort": true, 00:15:27.266 "seek_hole": false, 00:15:27.266 "seek_data": false, 00:15:27.266 "copy": true, 00:15:27.266 "nvme_iov_md": false 00:15:27.266 }, 00:15:27.266 "memory_domains": [ 00:15:27.266 { 00:15:27.266 "dma_device_id": "system", 00:15:27.266 "dma_device_type": 1 00:15:27.266 }, 00:15:27.266 { 00:15:27.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.266 "dma_device_type": 2 00:15:27.266 } 00:15:27.266 ], 00:15:27.266 "driver_specific": {} 00:15:27.266 } 00:15:27.266 ] 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.266 BaseBdev3 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.266 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.266 [ 00:15:27.266 { 00:15:27.266 "name": "BaseBdev3", 00:15:27.266 "aliases": [ 00:15:27.266 "4f02c988-5dd9-44b6-ad32-f1d62cb3f5b5" 00:15:27.266 ], 00:15:27.266 "product_name": "Malloc disk", 00:15:27.266 "block_size": 512, 00:15:27.266 "num_blocks": 65536, 00:15:27.266 "uuid": "4f02c988-5dd9-44b6-ad32-f1d62cb3f5b5", 00:15:27.266 "assigned_rate_limits": { 00:15:27.266 "rw_ios_per_sec": 0, 00:15:27.266 "rw_mbytes_per_sec": 0, 00:15:27.266 "r_mbytes_per_sec": 0, 00:15:27.266 "w_mbytes_per_sec": 0 00:15:27.266 }, 00:15:27.266 "claimed": false, 00:15:27.266 "zoned": false, 00:15:27.266 "supported_io_types": { 00:15:27.266 "read": true, 00:15:27.266 "write": true, 00:15:27.266 "unmap": true, 00:15:27.266 "flush": true, 00:15:27.266 "reset": true, 00:15:27.266 "nvme_admin": false, 00:15:27.266 "nvme_io": false, 00:15:27.266 "nvme_io_md": false, 00:15:27.266 "write_zeroes": true, 00:15:27.266 "zcopy": true, 00:15:27.266 "get_zone_info": false, 00:15:27.266 "zone_management": false, 
00:15:27.266 "zone_append": false, 00:15:27.266 "compare": false, 00:15:27.266 "compare_and_write": false, 00:15:27.266 "abort": true, 00:15:27.266 "seek_hole": false, 00:15:27.266 "seek_data": false, 00:15:27.267 "copy": true, 00:15:27.267 "nvme_iov_md": false 00:15:27.267 }, 00:15:27.267 "memory_domains": [ 00:15:27.267 { 00:15:27.267 "dma_device_id": "system", 00:15:27.267 "dma_device_type": 1 00:15:27.267 }, 00:15:27.267 { 00:15:27.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.267 "dma_device_type": 2 00:15:27.267 } 00:15:27.267 ], 00:15:27.267 "driver_specific": {} 00:15:27.267 } 00:15:27.267 ] 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.267 [2024-11-08 16:56:56.735601] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.267 [2024-11-08 16:56:56.735736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.267 [2024-11-08 16:56:56.735798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.267 [2024-11-08 16:56:56.737984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.267 
16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.267 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.526 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:27.526 "name": "Existed_Raid", 00:15:27.526 "uuid": "325a2e42-08f1-42cd-bf83-14e2ca955467", 00:15:27.526 "strip_size_kb": 64, 00:15:27.526 "state": "configuring", 00:15:27.526 "raid_level": "raid5f", 00:15:27.526 "superblock": true, 00:15:27.526 "num_base_bdevs": 3, 00:15:27.526 "num_base_bdevs_discovered": 2, 00:15:27.526 "num_base_bdevs_operational": 3, 00:15:27.526 "base_bdevs_list": [ 00:15:27.526 { 00:15:27.526 "name": "BaseBdev1", 00:15:27.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.526 "is_configured": false, 00:15:27.526 "data_offset": 0, 00:15:27.526 "data_size": 0 00:15:27.526 }, 00:15:27.526 { 00:15:27.526 "name": "BaseBdev2", 00:15:27.526 "uuid": "48b21a08-9045-4df8-b515-1d9919e1ca8c", 00:15:27.526 "is_configured": true, 00:15:27.526 "data_offset": 2048, 00:15:27.526 "data_size": 63488 00:15:27.526 }, 00:15:27.526 { 00:15:27.526 "name": "BaseBdev3", 00:15:27.526 "uuid": "4f02c988-5dd9-44b6-ad32-f1d62cb3f5b5", 00:15:27.526 "is_configured": true, 00:15:27.526 "data_offset": 2048, 00:15:27.526 "data_size": 63488 00:15:27.526 } 00:15:27.526 ] 00:15:27.526 }' 00:15:27.526 16:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.526 16:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.825 [2024-11-08 16:56:57.202811] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.825 "name": "Existed_Raid", 00:15:27.825 "uuid": "325a2e42-08f1-42cd-bf83-14e2ca955467", 00:15:27.825 "strip_size_kb": 64, 00:15:27.825 
"state": "configuring", 00:15:27.825 "raid_level": "raid5f", 00:15:27.825 "superblock": true, 00:15:27.825 "num_base_bdevs": 3, 00:15:27.825 "num_base_bdevs_discovered": 1, 00:15:27.825 "num_base_bdevs_operational": 3, 00:15:27.825 "base_bdevs_list": [ 00:15:27.825 { 00:15:27.825 "name": "BaseBdev1", 00:15:27.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.825 "is_configured": false, 00:15:27.825 "data_offset": 0, 00:15:27.825 "data_size": 0 00:15:27.825 }, 00:15:27.825 { 00:15:27.825 "name": null, 00:15:27.825 "uuid": "48b21a08-9045-4df8-b515-1d9919e1ca8c", 00:15:27.825 "is_configured": false, 00:15:27.825 "data_offset": 0, 00:15:27.825 "data_size": 63488 00:15:27.825 }, 00:15:27.825 { 00:15:27.825 "name": "BaseBdev3", 00:15:27.825 "uuid": "4f02c988-5dd9-44b6-ad32-f1d62cb3f5b5", 00:15:27.825 "is_configured": true, 00:15:27.825 "data_offset": 2048, 00:15:27.825 "data_size": 63488 00:15:27.825 } 00:15:27.825 ] 00:15:27.825 }' 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.825 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.412 BaseBdev1 00:15:28.412 [2024-11-08 16:56:57.737009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.412 16:56:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.412 [ 00:15:28.412 { 00:15:28.412 "name": "BaseBdev1", 00:15:28.412 "aliases": [ 00:15:28.412 "25d3f87c-c344-4da9-86ec-ff0b79ccb6b3" 00:15:28.412 ], 00:15:28.412 "product_name": "Malloc disk", 00:15:28.412 "block_size": 512, 00:15:28.412 "num_blocks": 65536, 00:15:28.412 "uuid": "25d3f87c-c344-4da9-86ec-ff0b79ccb6b3", 00:15:28.412 "assigned_rate_limits": { 00:15:28.412 "rw_ios_per_sec": 0, 00:15:28.412 "rw_mbytes_per_sec": 0, 00:15:28.412 "r_mbytes_per_sec": 0, 00:15:28.412 "w_mbytes_per_sec": 0 00:15:28.412 }, 00:15:28.412 "claimed": true, 00:15:28.412 "claim_type": "exclusive_write", 00:15:28.412 "zoned": false, 00:15:28.412 "supported_io_types": { 00:15:28.412 "read": true, 00:15:28.412 "write": true, 00:15:28.412 "unmap": true, 00:15:28.412 "flush": true, 00:15:28.412 "reset": true, 00:15:28.412 "nvme_admin": false, 00:15:28.413 "nvme_io": false, 00:15:28.413 "nvme_io_md": false, 00:15:28.413 "write_zeroes": true, 00:15:28.413 "zcopy": true, 00:15:28.413 "get_zone_info": false, 00:15:28.413 "zone_management": false, 00:15:28.413 "zone_append": false, 00:15:28.413 "compare": false, 00:15:28.413 "compare_and_write": false, 00:15:28.413 "abort": true, 00:15:28.413 "seek_hole": false, 00:15:28.413 "seek_data": false, 00:15:28.413 "copy": true, 00:15:28.413 "nvme_iov_md": false 00:15:28.413 }, 00:15:28.413 "memory_domains": [ 00:15:28.413 { 00:15:28.413 "dma_device_id": "system", 00:15:28.413 "dma_device_type": 1 00:15:28.413 }, 00:15:28.413 { 00:15:28.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.413 "dma_device_type": 2 00:15:28.413 } 00:15:28.413 ], 00:15:28.413 "driver_specific": {} 00:15:28.413 } 00:15:28.413 ] 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.413 "name": "Existed_Raid", 00:15:28.413 "uuid": "325a2e42-08f1-42cd-bf83-14e2ca955467", 00:15:28.413 "strip_size_kb": 64, 00:15:28.413 
"state": "configuring", 00:15:28.413 "raid_level": "raid5f", 00:15:28.413 "superblock": true, 00:15:28.413 "num_base_bdevs": 3, 00:15:28.413 "num_base_bdevs_discovered": 2, 00:15:28.413 "num_base_bdevs_operational": 3, 00:15:28.413 "base_bdevs_list": [ 00:15:28.413 { 00:15:28.413 "name": "BaseBdev1", 00:15:28.413 "uuid": "25d3f87c-c344-4da9-86ec-ff0b79ccb6b3", 00:15:28.413 "is_configured": true, 00:15:28.413 "data_offset": 2048, 00:15:28.413 "data_size": 63488 00:15:28.413 }, 00:15:28.413 { 00:15:28.413 "name": null, 00:15:28.413 "uuid": "48b21a08-9045-4df8-b515-1d9919e1ca8c", 00:15:28.413 "is_configured": false, 00:15:28.413 "data_offset": 0, 00:15:28.413 "data_size": 63488 00:15:28.413 }, 00:15:28.413 { 00:15:28.413 "name": "BaseBdev3", 00:15:28.413 "uuid": "4f02c988-5dd9-44b6-ad32-f1d62cb3f5b5", 00:15:28.413 "is_configured": true, 00:15:28.413 "data_offset": 2048, 00:15:28.413 "data_size": 63488 00:15:28.413 } 00:15:28.413 ] 00:15:28.413 }' 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.413 16:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.981 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:28.981 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev3 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.982 [2024-11-08 16:56:58.276180] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.982 16:56:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.982 "name": "Existed_Raid", 00:15:28.982 "uuid": "325a2e42-08f1-42cd-bf83-14e2ca955467", 00:15:28.982 "strip_size_kb": 64, 00:15:28.982 "state": "configuring", 00:15:28.982 "raid_level": "raid5f", 00:15:28.982 "superblock": true, 00:15:28.982 "num_base_bdevs": 3, 00:15:28.982 "num_base_bdevs_discovered": 1, 00:15:28.982 "num_base_bdevs_operational": 3, 00:15:28.982 "base_bdevs_list": [ 00:15:28.982 { 00:15:28.982 "name": "BaseBdev1", 00:15:28.982 "uuid": "25d3f87c-c344-4da9-86ec-ff0b79ccb6b3", 00:15:28.982 "is_configured": true, 00:15:28.982 "data_offset": 2048, 00:15:28.982 "data_size": 63488 00:15:28.982 }, 00:15:28.982 { 00:15:28.982 "name": null, 00:15:28.982 "uuid": "48b21a08-9045-4df8-b515-1d9919e1ca8c", 00:15:28.982 "is_configured": false, 00:15:28.982 "data_offset": 0, 00:15:28.982 "data_size": 63488 00:15:28.982 }, 00:15:28.982 { 00:15:28.982 "name": null, 00:15:28.982 "uuid": "4f02c988-5dd9-44b6-ad32-f1d62cb3f5b5", 00:15:28.982 "is_configured": false, 00:15:28.982 "data_offset": 0, 00:15:28.982 "data_size": 63488 00:15:28.982 } 00:15:28.982 ] 00:15:28.982 }' 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.982 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.241 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.241 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.241 16:56:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.241 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:29.241 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.500 [2024-11-08 16:56:58.783382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.500 "name": "Existed_Raid", 00:15:29.500 "uuid": "325a2e42-08f1-42cd-bf83-14e2ca955467", 00:15:29.500 "strip_size_kb": 64, 00:15:29.500 "state": "configuring", 00:15:29.500 "raid_level": "raid5f", 00:15:29.500 "superblock": true, 00:15:29.500 "num_base_bdevs": 3, 00:15:29.500 "num_base_bdevs_discovered": 2, 00:15:29.500 "num_base_bdevs_operational": 3, 00:15:29.500 "base_bdevs_list": [ 00:15:29.500 { 00:15:29.500 "name": "BaseBdev1", 00:15:29.500 "uuid": "25d3f87c-c344-4da9-86ec-ff0b79ccb6b3", 00:15:29.500 "is_configured": true, 00:15:29.500 "data_offset": 2048, 00:15:29.500 "data_size": 63488 00:15:29.500 }, 00:15:29.500 { 00:15:29.500 "name": null, 00:15:29.500 "uuid": "48b21a08-9045-4df8-b515-1d9919e1ca8c", 00:15:29.500 "is_configured": false, 00:15:29.500 "data_offset": 0, 00:15:29.500 "data_size": 63488 00:15:29.500 }, 00:15:29.500 { 00:15:29.500 "name": "BaseBdev3", 00:15:29.500 "uuid": "4f02c988-5dd9-44b6-ad32-f1d62cb3f5b5", 00:15:29.500 "is_configured": true, 00:15:29.500 "data_offset": 
2048, 00:15:29.500 "data_size": 63488 00:15:29.500 } 00:15:29.500 ] 00:15:29.500 }' 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.500 16:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.760 [2024-11-08 16:56:59.270561] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.760 16:56:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.760 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.019 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.019 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.019 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.019 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.019 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.019 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.019 "name": "Existed_Raid", 00:15:30.019 "uuid": "325a2e42-08f1-42cd-bf83-14e2ca955467", 00:15:30.019 "strip_size_kb": 64, 00:15:30.019 "state": "configuring", 00:15:30.019 "raid_level": "raid5f", 00:15:30.019 "superblock": true, 00:15:30.019 "num_base_bdevs": 3, 00:15:30.019 "num_base_bdevs_discovered": 1, 00:15:30.019 "num_base_bdevs_operational": 3, 00:15:30.019 "base_bdevs_list": [ 00:15:30.019 { 00:15:30.019 "name": null, 00:15:30.019 "uuid": "25d3f87c-c344-4da9-86ec-ff0b79ccb6b3", 
00:15:30.019 "is_configured": false, 00:15:30.019 "data_offset": 0, 00:15:30.019 "data_size": 63488 00:15:30.019 }, 00:15:30.019 { 00:15:30.019 "name": null, 00:15:30.019 "uuid": "48b21a08-9045-4df8-b515-1d9919e1ca8c", 00:15:30.019 "is_configured": false, 00:15:30.019 "data_offset": 0, 00:15:30.019 "data_size": 63488 00:15:30.019 }, 00:15:30.019 { 00:15:30.019 "name": "BaseBdev3", 00:15:30.019 "uuid": "4f02c988-5dd9-44b6-ad32-f1d62cb3f5b5", 00:15:30.019 "is_configured": true, 00:15:30.019 "data_offset": 2048, 00:15:30.019 "data_size": 63488 00:15:30.019 } 00:15:30.019 ] 00:15:30.019 }' 00:15:30.019 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.019 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.279 [2024-11-08 16:56:59.772268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.279 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.538 16:56:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.538 "name": "Existed_Raid", 00:15:30.538 "uuid": "325a2e42-08f1-42cd-bf83-14e2ca955467", 00:15:30.538 "strip_size_kb": 64, 00:15:30.538 "state": "configuring", 00:15:30.538 "raid_level": "raid5f", 00:15:30.538 "superblock": true, 00:15:30.538 "num_base_bdevs": 3, 00:15:30.538 "num_base_bdevs_discovered": 2, 00:15:30.538 "num_base_bdevs_operational": 3, 00:15:30.538 "base_bdevs_list": [ 00:15:30.538 { 00:15:30.538 "name": null, 00:15:30.538 "uuid": "25d3f87c-c344-4da9-86ec-ff0b79ccb6b3", 00:15:30.538 "is_configured": false, 00:15:30.538 "data_offset": 0, 00:15:30.538 "data_size": 63488 00:15:30.539 }, 00:15:30.539 { 00:15:30.539 "name": "BaseBdev2", 00:15:30.539 "uuid": "48b21a08-9045-4df8-b515-1d9919e1ca8c", 00:15:30.539 "is_configured": true, 00:15:30.539 "data_offset": 2048, 00:15:30.539 "data_size": 63488 00:15:30.539 }, 00:15:30.539 { 00:15:30.539 "name": "BaseBdev3", 00:15:30.539 "uuid": "4f02c988-5dd9-44b6-ad32-f1d62cb3f5b5", 00:15:30.539 "is_configured": true, 00:15:30.539 "data_offset": 2048, 00:15:30.539 "data_size": 63488 00:15:30.539 } 00:15:30.539 ] 00:15:30.539 }' 00:15:30.539 16:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.539 16:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.798 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.798 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.798 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.798 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:30.798 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.798 16:57:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:30.798 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:30.798 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.798 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.798 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.798 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 25d3f87c-c344-4da9-86ec-ff0b79ccb6b3 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.058 [2024-11-08 16:57:00.342510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:31.058 [2024-11-08 16:57:00.342746] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:31.058 [2024-11-08 16:57:00.342768] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:31.058 [2024-11-08 16:57:00.343043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:31.058 NewBaseBdev 00:15:31.058 [2024-11-08 16:57:00.343540] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:31.058 [2024-11-08 16:57:00.343560] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:15:31.058 [2024-11-08 16:57:00.343687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.058 [ 00:15:31.058 { 00:15:31.058 "name": "NewBaseBdev", 00:15:31.058 "aliases": [ 00:15:31.058 "25d3f87c-c344-4da9-86ec-ff0b79ccb6b3" 00:15:31.058 ], 00:15:31.058 "product_name": "Malloc disk", 00:15:31.058 "block_size": 512, 00:15:31.058 "num_blocks": 65536, 00:15:31.058 "uuid": "25d3f87c-c344-4da9-86ec-ff0b79ccb6b3", 
00:15:31.058 "assigned_rate_limits": { 00:15:31.058 "rw_ios_per_sec": 0, 00:15:31.058 "rw_mbytes_per_sec": 0, 00:15:31.058 "r_mbytes_per_sec": 0, 00:15:31.058 "w_mbytes_per_sec": 0 00:15:31.058 }, 00:15:31.058 "claimed": true, 00:15:31.058 "claim_type": "exclusive_write", 00:15:31.058 "zoned": false, 00:15:31.058 "supported_io_types": { 00:15:31.058 "read": true, 00:15:31.058 "write": true, 00:15:31.058 "unmap": true, 00:15:31.058 "flush": true, 00:15:31.058 "reset": true, 00:15:31.058 "nvme_admin": false, 00:15:31.058 "nvme_io": false, 00:15:31.058 "nvme_io_md": false, 00:15:31.058 "write_zeroes": true, 00:15:31.058 "zcopy": true, 00:15:31.058 "get_zone_info": false, 00:15:31.058 "zone_management": false, 00:15:31.058 "zone_append": false, 00:15:31.058 "compare": false, 00:15:31.058 "compare_and_write": false, 00:15:31.058 "abort": true, 00:15:31.058 "seek_hole": false, 00:15:31.058 "seek_data": false, 00:15:31.058 "copy": true, 00:15:31.058 "nvme_iov_md": false 00:15:31.058 }, 00:15:31.058 "memory_domains": [ 00:15:31.058 { 00:15:31.058 "dma_device_id": "system", 00:15:31.058 "dma_device_type": 1 00:15:31.058 }, 00:15:31.058 { 00:15:31.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.058 "dma_device_type": 2 00:15:31.058 } 00:15:31.058 ], 00:15:31.058 "driver_specific": {} 00:15:31.058 } 00:15:31.058 ] 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.058 16:57:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.058 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.059 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.059 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.059 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.059 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.059 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.059 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.059 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.059 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.059 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.059 "name": "Existed_Raid", 00:15:31.059 "uuid": "325a2e42-08f1-42cd-bf83-14e2ca955467", 00:15:31.059 "strip_size_kb": 64, 00:15:31.059 "state": "online", 00:15:31.059 "raid_level": "raid5f", 00:15:31.059 "superblock": true, 00:15:31.059 "num_base_bdevs": 3, 00:15:31.059 "num_base_bdevs_discovered": 3, 00:15:31.059 "num_base_bdevs_operational": 3, 00:15:31.059 "base_bdevs_list": [ 00:15:31.059 { 00:15:31.059 "name": "NewBaseBdev", 00:15:31.059 "uuid": "25d3f87c-c344-4da9-86ec-ff0b79ccb6b3", 
00:15:31.059 "is_configured": true, 00:15:31.059 "data_offset": 2048, 00:15:31.059 "data_size": 63488 00:15:31.059 }, 00:15:31.059 { 00:15:31.059 "name": "BaseBdev2", 00:15:31.059 "uuid": "48b21a08-9045-4df8-b515-1d9919e1ca8c", 00:15:31.059 "is_configured": true, 00:15:31.059 "data_offset": 2048, 00:15:31.059 "data_size": 63488 00:15:31.059 }, 00:15:31.059 { 00:15:31.059 "name": "BaseBdev3", 00:15:31.059 "uuid": "4f02c988-5dd9-44b6-ad32-f1d62cb3f5b5", 00:15:31.059 "is_configured": true, 00:15:31.059 "data_offset": 2048, 00:15:31.059 "data_size": 63488 00:15:31.059 } 00:15:31.059 ] 00:15:31.059 }' 00:15:31.059 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.059 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:31.628 
[2024-11-08 16:57:00.857947] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:31.628 "name": "Existed_Raid", 00:15:31.628 "aliases": [ 00:15:31.628 "325a2e42-08f1-42cd-bf83-14e2ca955467" 00:15:31.628 ], 00:15:31.628 "product_name": "Raid Volume", 00:15:31.628 "block_size": 512, 00:15:31.628 "num_blocks": 126976, 00:15:31.628 "uuid": "325a2e42-08f1-42cd-bf83-14e2ca955467", 00:15:31.628 "assigned_rate_limits": { 00:15:31.628 "rw_ios_per_sec": 0, 00:15:31.628 "rw_mbytes_per_sec": 0, 00:15:31.628 "r_mbytes_per_sec": 0, 00:15:31.628 "w_mbytes_per_sec": 0 00:15:31.628 }, 00:15:31.628 "claimed": false, 00:15:31.628 "zoned": false, 00:15:31.628 "supported_io_types": { 00:15:31.628 "read": true, 00:15:31.628 "write": true, 00:15:31.628 "unmap": false, 00:15:31.628 "flush": false, 00:15:31.628 "reset": true, 00:15:31.628 "nvme_admin": false, 00:15:31.628 "nvme_io": false, 00:15:31.628 "nvme_io_md": false, 00:15:31.628 "write_zeroes": true, 00:15:31.628 "zcopy": false, 00:15:31.628 "get_zone_info": false, 00:15:31.628 "zone_management": false, 00:15:31.628 "zone_append": false, 00:15:31.628 "compare": false, 00:15:31.628 "compare_and_write": false, 00:15:31.628 "abort": false, 00:15:31.628 "seek_hole": false, 00:15:31.628 "seek_data": false, 00:15:31.628 "copy": false, 00:15:31.628 "nvme_iov_md": false 00:15:31.628 }, 00:15:31.628 "driver_specific": { 00:15:31.628 "raid": { 00:15:31.628 "uuid": "325a2e42-08f1-42cd-bf83-14e2ca955467", 00:15:31.628 "strip_size_kb": 64, 00:15:31.628 "state": "online", 00:15:31.628 "raid_level": "raid5f", 00:15:31.628 "superblock": true, 00:15:31.628 "num_base_bdevs": 3, 00:15:31.628 "num_base_bdevs_discovered": 3, 00:15:31.628 "num_base_bdevs_operational": 3, 00:15:31.628 "base_bdevs_list": 
[ 00:15:31.628 { 00:15:31.628 "name": "NewBaseBdev", 00:15:31.628 "uuid": "25d3f87c-c344-4da9-86ec-ff0b79ccb6b3", 00:15:31.628 "is_configured": true, 00:15:31.628 "data_offset": 2048, 00:15:31.628 "data_size": 63488 00:15:31.628 }, 00:15:31.628 { 00:15:31.628 "name": "BaseBdev2", 00:15:31.628 "uuid": "48b21a08-9045-4df8-b515-1d9919e1ca8c", 00:15:31.628 "is_configured": true, 00:15:31.628 "data_offset": 2048, 00:15:31.628 "data_size": 63488 00:15:31.628 }, 00:15:31.628 { 00:15:31.628 "name": "BaseBdev3", 00:15:31.628 "uuid": "4f02c988-5dd9-44b6-ad32-f1d62cb3f5b5", 00:15:31.628 "is_configured": true, 00:15:31.628 "data_offset": 2048, 00:15:31.628 "data_size": 63488 00:15:31.628 } 00:15:31.628 ] 00:15:31.628 } 00:15:31.628 } 00:15:31.628 }' 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:31.628 BaseBdev2 00:15:31.628 BaseBdev3' 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.628 16:57:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.628 16:57:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.628 [2024-11-08 16:57:01.145249] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.628 [2024-11-08 16:57:01.145335] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.628 [2024-11-08 16:57:01.145437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.628 [2024-11-08 16:57:01.145754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.628 [2024-11-08 16:57:01.145781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91145 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91145 ']' 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91145 00:15:31.628 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:31.888 16:57:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.888 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91145 00:15:31.888 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:31.888 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:31.888 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91145' 00:15:31.888 killing process with pid 91145 00:15:31.888 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91145 00:15:31.888 [2024-11-08 16:57:01.174512] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.888 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 91145 00:15:31.888 [2024-11-08 16:57:01.206402] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.148 16:57:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:32.148 ************************************ 00:15:32.148 END TEST raid5f_state_function_test_sb 00:15:32.148 ************************************ 00:15:32.148 00:15:32.148 real 0m9.186s 00:15:32.148 user 0m15.667s 00:15:32.148 sys 0m1.914s 00:15:32.148 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.148 16:57:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.148 16:57:01 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:32.148 16:57:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:32.148 16:57:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.148 16:57:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:15:32.148 ************************************ 00:15:32.148 START TEST raid5f_superblock_test 00:15:32.148 ************************************ 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91749 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91749 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91749 ']' 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.148 16:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.149 16:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.149 16:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.149 16:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.149 [2024-11-08 16:57:01.615374] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:32.149 [2024-11-08 16:57:01.615530] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91749 ] 00:15:32.408 [2024-11-08 16:57:01.758450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.408 [2024-11-08 16:57:01.808889] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.408 [2024-11-08 16:57:01.853226] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.408 [2024-11-08 16:57:01.853267] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:33.360 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.361 malloc1 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.361 [2024-11-08 16:57:02.548552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:33.361 [2024-11-08 16:57:02.548700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.361 [2024-11-08 16:57:02.548743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:33.361 [2024-11-08 16:57:02.548784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.361 [2024-11-08 16:57:02.551030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.361 [2024-11-08 16:57:02.551106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:33.361 pt1 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.361 malloc2 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.361 [2024-11-08 16:57:02.589186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:33.361 [2024-11-08 16:57:02.589252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.361 [2024-11-08 16:57:02.589270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:33.361 [2024-11-08 16:57:02.589283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.361 [2024-11-08 16:57:02.591788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.361 [2024-11-08 16:57:02.591890] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:33.361 pt2 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.361 malloc3 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.361 [2024-11-08 16:57:02.617979] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:33.361 [2024-11-08 16:57:02.618086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.361 [2024-11-08 16:57:02.618124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:33.361 [2024-11-08 16:57:02.618158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.361 [2024-11-08 16:57:02.620561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.361 [2024-11-08 16:57:02.620651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:33.361 pt3 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.361 [2024-11-08 16:57:02.630022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:33.361 [2024-11-08 16:57:02.632158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:33.361 [2024-11-08 16:57:02.632282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:33.361 [2024-11-08 16:57:02.632504] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:33.361 [2024-11-08 16:57:02.632555] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:33.361 [2024-11-08 16:57:02.632879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:33.361 [2024-11-08 16:57:02.633338] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:33.361 [2024-11-08 16:57:02.633390] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:33.361 [2024-11-08 16:57:02.633575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.361 "name": "raid_bdev1", 00:15:33.361 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:33.361 "strip_size_kb": 64, 00:15:33.361 "state": "online", 00:15:33.361 "raid_level": "raid5f", 00:15:33.361 "superblock": true, 00:15:33.361 "num_base_bdevs": 3, 00:15:33.361 "num_base_bdevs_discovered": 3, 00:15:33.361 "num_base_bdevs_operational": 3, 00:15:33.361 "base_bdevs_list": [ 00:15:33.361 { 00:15:33.361 "name": "pt1", 00:15:33.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.361 "is_configured": true, 00:15:33.361 "data_offset": 2048, 00:15:33.361 "data_size": 63488 00:15:33.361 }, 00:15:33.361 { 00:15:33.361 "name": "pt2", 00:15:33.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.361 "is_configured": true, 00:15:33.361 "data_offset": 2048, 00:15:33.361 "data_size": 63488 00:15:33.361 }, 00:15:33.361 { 00:15:33.361 "name": "pt3", 00:15:33.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.361 "is_configured": true, 00:15:33.361 "data_offset": 2048, 00:15:33.361 "data_size": 63488 00:15:33.361 } 00:15:33.361 ] 00:15:33.361 }' 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.361 16:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.620 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:33.620 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:33.620 16:57:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:33.621 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:33.621 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.621 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:33.621 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.621 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.621 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.621 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.621 [2024-11-08 16:57:03.098418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.621 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.621 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.621 "name": "raid_bdev1", 00:15:33.621 "aliases": [ 00:15:33.621 "85ebf1cd-7439-43a5-ae94-cecdf3dca970" 00:15:33.621 ], 00:15:33.621 "product_name": "Raid Volume", 00:15:33.621 "block_size": 512, 00:15:33.621 "num_blocks": 126976, 00:15:33.621 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:33.621 "assigned_rate_limits": { 00:15:33.621 "rw_ios_per_sec": 0, 00:15:33.621 "rw_mbytes_per_sec": 0, 00:15:33.621 "r_mbytes_per_sec": 0, 00:15:33.621 "w_mbytes_per_sec": 0 00:15:33.621 }, 00:15:33.621 "claimed": false, 00:15:33.621 "zoned": false, 00:15:33.621 "supported_io_types": { 00:15:33.621 "read": true, 00:15:33.621 "write": true, 00:15:33.621 "unmap": false, 00:15:33.621 "flush": false, 00:15:33.621 "reset": true, 00:15:33.621 "nvme_admin": false, 00:15:33.621 "nvme_io": false, 00:15:33.621 "nvme_io_md": false, 
00:15:33.621 "write_zeroes": true, 00:15:33.621 "zcopy": false, 00:15:33.621 "get_zone_info": false, 00:15:33.621 "zone_management": false, 00:15:33.621 "zone_append": false, 00:15:33.621 "compare": false, 00:15:33.621 "compare_and_write": false, 00:15:33.621 "abort": false, 00:15:33.621 "seek_hole": false, 00:15:33.621 "seek_data": false, 00:15:33.621 "copy": false, 00:15:33.621 "nvme_iov_md": false 00:15:33.621 }, 00:15:33.621 "driver_specific": { 00:15:33.621 "raid": { 00:15:33.621 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:33.621 "strip_size_kb": 64, 00:15:33.621 "state": "online", 00:15:33.621 "raid_level": "raid5f", 00:15:33.621 "superblock": true, 00:15:33.621 "num_base_bdevs": 3, 00:15:33.621 "num_base_bdevs_discovered": 3, 00:15:33.621 "num_base_bdevs_operational": 3, 00:15:33.621 "base_bdevs_list": [ 00:15:33.621 { 00:15:33.621 "name": "pt1", 00:15:33.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.621 "is_configured": true, 00:15:33.621 "data_offset": 2048, 00:15:33.621 "data_size": 63488 00:15:33.621 }, 00:15:33.621 { 00:15:33.621 "name": "pt2", 00:15:33.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.621 "is_configured": true, 00:15:33.621 "data_offset": 2048, 00:15:33.621 "data_size": 63488 00:15:33.621 }, 00:15:33.621 { 00:15:33.621 "name": "pt3", 00:15:33.621 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.621 "is_configured": true, 00:15:33.621 "data_offset": 2048, 00:15:33.621 "data_size": 63488 00:15:33.621 } 00:15:33.621 ] 00:15:33.621 } 00:15:33.621 } 00:15:33.621 }' 00:15:33.621 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:33.879 pt2 00:15:33.879 pt3' 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.879 
16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.879 [2024-11-08 16:57:03.377942] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.879 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=85ebf1cd-7439-43a5-ae94-cecdf3dca970 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 85ebf1cd-7439-43a5-ae94-cecdf3dca970 ']' 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.139 16:57:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.139 [2024-11-08 16:57:03.421621] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.139 [2024-11-08 16:57:03.421712] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.139 [2024-11-08 16:57:03.421885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.139 [2024-11-08 16:57:03.421991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.139 [2024-11-08 16:57:03.422007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.139 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.140 [2024-11-08 16:57:03.589365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:34.140 [2024-11-08 16:57:03.591500] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:34.140 [2024-11-08 16:57:03.591620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:34.140 [2024-11-08 16:57:03.591734] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:34.140 [2024-11-08 16:57:03.591843] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:34.140 [2024-11-08 16:57:03.591916] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:34.140 [2024-11-08 16:57:03.591967] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.140 [2024-11-08 16:57:03.592006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:34.140 request: 00:15:34.140 { 00:15:34.140 "name": "raid_bdev1", 00:15:34.140 "raid_level": "raid5f", 00:15:34.140 "base_bdevs": [ 00:15:34.140 "malloc1", 00:15:34.140 "malloc2", 00:15:34.140 "malloc3" 00:15:34.140 ], 00:15:34.140 "strip_size_kb": 64, 00:15:34.140 "superblock": false, 00:15:34.140 "method": "bdev_raid_create", 00:15:34.140 "req_id": 1 00:15:34.140 } 00:15:34.140 Got JSON-RPC error response 00:15:34.140 response: 00:15:34.140 { 00:15:34.140 "code": -17, 00:15:34.140 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:34.140 } 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.140 
16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.140 [2024-11-08 16:57:03.653175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:34.140 [2024-11-08 16:57:03.653272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.140 [2024-11-08 16:57:03.653313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:34.140 [2024-11-08 16:57:03.653344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.140 [2024-11-08 16:57:03.655613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.140 [2024-11-08 16:57:03.655704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:34.140 [2024-11-08 16:57:03.655816] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:34.140 [2024-11-08 16:57:03.655889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:34.140 pt1 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.140 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.399 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.399 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.399 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.399 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.399 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.399 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.399 "name": "raid_bdev1", 00:15:34.399 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:34.399 "strip_size_kb": 64, 00:15:34.399 "state": "configuring", 00:15:34.399 "raid_level": "raid5f", 00:15:34.399 "superblock": true, 00:15:34.399 "num_base_bdevs": 3, 00:15:34.399 "num_base_bdevs_discovered": 1, 00:15:34.399 
"num_base_bdevs_operational": 3, 00:15:34.399 "base_bdevs_list": [ 00:15:34.399 { 00:15:34.399 "name": "pt1", 00:15:34.399 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.399 "is_configured": true, 00:15:34.399 "data_offset": 2048, 00:15:34.399 "data_size": 63488 00:15:34.399 }, 00:15:34.399 { 00:15:34.399 "name": null, 00:15:34.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.399 "is_configured": false, 00:15:34.399 "data_offset": 2048, 00:15:34.399 "data_size": 63488 00:15:34.399 }, 00:15:34.399 { 00:15:34.399 "name": null, 00:15:34.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.399 "is_configured": false, 00:15:34.399 "data_offset": 2048, 00:15:34.399 "data_size": 63488 00:15:34.399 } 00:15:34.399 ] 00:15:34.399 }' 00:15:34.399 16:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.399 16:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.659 [2024-11-08 16:57:04.124471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:34.659 [2024-11-08 16:57:04.124569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.659 [2024-11-08 16:57:04.124592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:34.659 [2024-11-08 16:57:04.124606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.659 [2024-11-08 16:57:04.125062] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.659 [2024-11-08 16:57:04.125091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:34.659 [2024-11-08 16:57:04.125177] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:34.659 [2024-11-08 16:57:04.125203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.659 pt2 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.659 [2024-11-08 16:57:04.136458] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.659 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.660 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.660 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:34.660 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.660 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.660 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.660 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.660 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.660 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.660 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.920 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.920 "name": "raid_bdev1", 00:15:34.920 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:34.920 "strip_size_kb": 64, 00:15:34.920 "state": "configuring", 00:15:34.920 "raid_level": "raid5f", 00:15:34.920 "superblock": true, 00:15:34.920 "num_base_bdevs": 3, 00:15:34.920 "num_base_bdevs_discovered": 1, 00:15:34.920 "num_base_bdevs_operational": 3, 00:15:34.920 "base_bdevs_list": [ 00:15:34.920 { 00:15:34.920 "name": "pt1", 00:15:34.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.920 "is_configured": true, 00:15:34.920 "data_offset": 2048, 00:15:34.920 "data_size": 63488 00:15:34.920 }, 00:15:34.920 { 00:15:34.920 "name": null, 00:15:34.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.920 "is_configured": false, 00:15:34.920 "data_offset": 0, 00:15:34.920 "data_size": 63488 00:15:34.920 }, 00:15:34.920 { 00:15:34.920 "name": null, 00:15:34.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.920 "is_configured": false, 00:15:34.920 "data_offset": 2048, 00:15:34.920 "data_size": 63488 00:15:34.920 } 00:15:34.920 ] 00:15:34.920 }' 00:15:34.920 16:57:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.920 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.180 [2024-11-08 16:57:04.643557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:35.180 [2024-11-08 16:57:04.643723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.180 [2024-11-08 16:57:04.643778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:35.180 [2024-11-08 16:57:04.643821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.180 [2024-11-08 16:57:04.644305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.180 [2024-11-08 16:57:04.644332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:35.180 [2024-11-08 16:57:04.644419] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:35.180 [2024-11-08 16:57:04.644444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:35.180 pt2 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:35.180 16:57:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.180 [2024-11-08 16:57:04.655504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:35.180 [2024-11-08 16:57:04.655559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.180 [2024-11-08 16:57:04.655581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:35.180 [2024-11-08 16:57:04.655590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.180 [2024-11-08 16:57:04.656006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.180 [2024-11-08 16:57:04.656030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:35.180 [2024-11-08 16:57:04.656104] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:35.180 [2024-11-08 16:57:04.656125] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:35.180 [2024-11-08 16:57:04.656237] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:35.180 [2024-11-08 16:57:04.656250] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:35.180 [2024-11-08 16:57:04.656513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:35.180 [2024-11-08 16:57:04.656978] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:35.180 [2024-11-08 16:57:04.656994] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:35.180 [2024-11-08 16:57:04.657108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.180 pt3 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.180 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.440 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.440 "name": "raid_bdev1", 00:15:35.440 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:35.440 "strip_size_kb": 64, 00:15:35.440 "state": "online", 00:15:35.440 "raid_level": "raid5f", 00:15:35.440 "superblock": true, 00:15:35.440 "num_base_bdevs": 3, 00:15:35.440 "num_base_bdevs_discovered": 3, 00:15:35.440 "num_base_bdevs_operational": 3, 00:15:35.440 "base_bdevs_list": [ 00:15:35.440 { 00:15:35.440 "name": "pt1", 00:15:35.440 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.440 "is_configured": true, 00:15:35.440 "data_offset": 2048, 00:15:35.440 "data_size": 63488 00:15:35.440 }, 00:15:35.440 { 00:15:35.440 "name": "pt2", 00:15:35.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.440 "is_configured": true, 00:15:35.440 "data_offset": 2048, 00:15:35.440 "data_size": 63488 00:15:35.440 }, 00:15:35.440 { 00:15:35.440 "name": "pt3", 00:15:35.440 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.440 "is_configured": true, 00:15:35.440 "data_offset": 2048, 00:15:35.440 "data_size": 63488 00:15:35.440 } 00:15:35.440 ] 00:15:35.440 }' 00:15:35.440 16:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.440 16:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:35.700 
16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.700 [2024-11-08 16:57:05.139014] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:35.700 "name": "raid_bdev1", 00:15:35.700 "aliases": [ 00:15:35.700 "85ebf1cd-7439-43a5-ae94-cecdf3dca970" 00:15:35.700 ], 00:15:35.700 "product_name": "Raid Volume", 00:15:35.700 "block_size": 512, 00:15:35.700 "num_blocks": 126976, 00:15:35.700 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:35.700 "assigned_rate_limits": { 00:15:35.700 "rw_ios_per_sec": 0, 00:15:35.700 "rw_mbytes_per_sec": 0, 00:15:35.700 "r_mbytes_per_sec": 0, 00:15:35.700 "w_mbytes_per_sec": 0 00:15:35.700 }, 00:15:35.700 "claimed": false, 00:15:35.700 "zoned": false, 00:15:35.700 "supported_io_types": { 00:15:35.700 "read": true, 00:15:35.700 "write": true, 00:15:35.700 "unmap": false, 00:15:35.700 "flush": false, 00:15:35.700 "reset": true, 00:15:35.700 "nvme_admin": false, 00:15:35.700 "nvme_io": false, 00:15:35.700 "nvme_io_md": false, 00:15:35.700 "write_zeroes": true, 00:15:35.700 "zcopy": false, 00:15:35.700 "get_zone_info": false, 
00:15:35.700 "zone_management": false, 00:15:35.700 "zone_append": false, 00:15:35.700 "compare": false, 00:15:35.700 "compare_and_write": false, 00:15:35.700 "abort": false, 00:15:35.700 "seek_hole": false, 00:15:35.700 "seek_data": false, 00:15:35.700 "copy": false, 00:15:35.700 "nvme_iov_md": false 00:15:35.700 }, 00:15:35.700 "driver_specific": { 00:15:35.700 "raid": { 00:15:35.700 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:35.700 "strip_size_kb": 64, 00:15:35.700 "state": "online", 00:15:35.700 "raid_level": "raid5f", 00:15:35.700 "superblock": true, 00:15:35.700 "num_base_bdevs": 3, 00:15:35.700 "num_base_bdevs_discovered": 3, 00:15:35.700 "num_base_bdevs_operational": 3, 00:15:35.700 "base_bdevs_list": [ 00:15:35.700 { 00:15:35.700 "name": "pt1", 00:15:35.700 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.700 "is_configured": true, 00:15:35.700 "data_offset": 2048, 00:15:35.700 "data_size": 63488 00:15:35.700 }, 00:15:35.700 { 00:15:35.700 "name": "pt2", 00:15:35.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.700 "is_configured": true, 00:15:35.700 "data_offset": 2048, 00:15:35.700 "data_size": 63488 00:15:35.700 }, 00:15:35.700 { 00:15:35.700 "name": "pt3", 00:15:35.700 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.700 "is_configured": true, 00:15:35.700 "data_offset": 2048, 00:15:35.700 "data_size": 63488 00:15:35.700 } 00:15:35.700 ] 00:15:35.700 } 00:15:35.700 } 00:15:35.700 }' 00:15:35.700 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:35.960 pt2 00:15:35.960 pt3' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.960 [2024-11-08 16:57:05.442459] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 85ebf1cd-7439-43a5-ae94-cecdf3dca970 '!=' 85ebf1cd-7439-43a5-ae94-cecdf3dca970 ']' 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:35.960 16:57:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.960 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.219 [2024-11-08 16:57:05.486229] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.219 "name": "raid_bdev1", 00:15:36.219 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:36.219 "strip_size_kb": 64, 00:15:36.219 "state": "online", 00:15:36.219 "raid_level": "raid5f", 00:15:36.219 "superblock": true, 00:15:36.219 "num_base_bdevs": 3, 00:15:36.219 "num_base_bdevs_discovered": 2, 00:15:36.219 "num_base_bdevs_operational": 2, 00:15:36.219 "base_bdevs_list": [ 00:15:36.219 { 00:15:36.219 "name": null, 00:15:36.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.219 "is_configured": false, 00:15:36.219 "data_offset": 0, 00:15:36.219 "data_size": 63488 00:15:36.219 }, 00:15:36.219 { 00:15:36.219 "name": "pt2", 00:15:36.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.219 "is_configured": true, 00:15:36.219 "data_offset": 2048, 00:15:36.219 "data_size": 63488 00:15:36.219 }, 00:15:36.219 { 00:15:36.219 "name": "pt3", 00:15:36.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.219 "is_configured": true, 00:15:36.219 "data_offset": 2048, 00:15:36.219 "data_size": 63488 00:15:36.219 } 00:15:36.219 ] 00:15:36.219 }' 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.219 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.481 [2024-11-08 16:57:05.929415] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:15:36.481 [2024-11-08 16:57:05.929520] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:36.481 [2024-11-08 16:57:05.929660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.481 [2024-11-08 16:57:05.929764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.481 [2024-11-08 16:57:05.929817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.481 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.481 16:57:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.482 16:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.482 [2024-11-08 16:57:05.997301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.482 [2024-11-08 16:57:05.997400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.482 [2024-11-08 16:57:05.997437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:36.482 [2024-11-08 16:57:05.997448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:36.482 [2024-11-08 16:57:05.999970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.482 [2024-11-08 16:57:06.000067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.482 [2024-11-08 16:57:06.000167] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:36.482 [2024-11-08 16:57:06.000209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.482 pt2 00:15:36.482 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.482 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:36.482 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.482 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.482 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.482 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.482 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.482 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.482 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.482 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.482 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.745 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.745 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:36.745 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.745 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.745 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.745 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.745 "name": "raid_bdev1", 00:15:36.745 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:36.745 "strip_size_kb": 64, 00:15:36.745 "state": "configuring", 00:15:36.745 "raid_level": "raid5f", 00:15:36.745 "superblock": true, 00:15:36.745 "num_base_bdevs": 3, 00:15:36.745 "num_base_bdevs_discovered": 1, 00:15:36.745 "num_base_bdevs_operational": 2, 00:15:36.745 "base_bdevs_list": [ 00:15:36.745 { 00:15:36.745 "name": null, 00:15:36.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.745 "is_configured": false, 00:15:36.745 "data_offset": 2048, 00:15:36.745 "data_size": 63488 00:15:36.745 }, 00:15:36.745 { 00:15:36.745 "name": "pt2", 00:15:36.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.745 "is_configured": true, 00:15:36.745 "data_offset": 2048, 00:15:36.745 "data_size": 63488 00:15:36.745 }, 00:15:36.745 { 00:15:36.745 "name": null, 00:15:36.745 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.745 "is_configured": false, 00:15:36.745 "data_offset": 2048, 00:15:36.745 "data_size": 63488 00:15:36.745 } 00:15:36.745 ] 00:15:36.745 }' 00:15:36.745 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.745 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.003 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.004 [2024-11-08 16:57:06.436595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:37.004 [2024-11-08 16:57:06.436692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.004 [2024-11-08 16:57:06.436718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:37.004 [2024-11-08 16:57:06.436729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.004 [2024-11-08 16:57:06.437224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.004 [2024-11-08 16:57:06.437245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:37.004 [2024-11-08 16:57:06.437329] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:37.004 [2024-11-08 16:57:06.437360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:37.004 [2024-11-08 16:57:06.437467] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:37.004 [2024-11-08 16:57:06.437477] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:37.004 [2024-11-08 16:57:06.437847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:37.004 [2024-11-08 16:57:06.438512] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:37.004 [2024-11-08 16:57:06.438595] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000006d00 00:15:37.004 [2024-11-08 16:57:06.438937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.004 pt3 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.004 "name": "raid_bdev1", 00:15:37.004 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:37.004 "strip_size_kb": 64, 00:15:37.004 "state": "online", 00:15:37.004 "raid_level": "raid5f", 00:15:37.004 "superblock": true, 00:15:37.004 "num_base_bdevs": 3, 00:15:37.004 "num_base_bdevs_discovered": 2, 00:15:37.004 "num_base_bdevs_operational": 2, 00:15:37.004 "base_bdevs_list": [ 00:15:37.004 { 00:15:37.004 "name": null, 00:15:37.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.004 "is_configured": false, 00:15:37.004 "data_offset": 2048, 00:15:37.004 "data_size": 63488 00:15:37.004 }, 00:15:37.004 { 00:15:37.004 "name": "pt2", 00:15:37.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.004 "is_configured": true, 00:15:37.004 "data_offset": 2048, 00:15:37.004 "data_size": 63488 00:15:37.004 }, 00:15:37.004 { 00:15:37.004 "name": "pt3", 00:15:37.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.004 "is_configured": true, 00:15:37.004 "data_offset": 2048, 00:15:37.004 "data_size": 63488 00:15:37.004 } 00:15:37.004 ] 00:15:37.004 }' 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.004 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.571 [2024-11-08 16:57:06.903819] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.571 [2024-11-08 16:57:06.903864] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.571 [2024-11-08 16:57:06.903961] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:37.571 [2024-11-08 16:57:06.904031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.571 [2024-11-08 16:57:06.904046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:37.571 16:57:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.571 [2024-11-08 16:57:06.975740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:37.571 [2024-11-08 16:57:06.975890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.571 [2024-11-08 16:57:06.975935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:37.571 [2024-11-08 16:57:06.975980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.571 [2024-11-08 16:57:06.978640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.571 [2024-11-08 16:57:06.978737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:37.571 [2024-11-08 16:57:06.978861] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:37.571 [2024-11-08 16:57:06.978943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:37.571 [2024-11-08 16:57:06.979134] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:37.571 [2024-11-08 16:57:06.979222] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.571 [2024-11-08 16:57:06.979311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:37.571 [2024-11-08 16:57:06.979407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.571 pt1 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:37.571 16:57:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.571 16:57:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.571 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.571 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.571 "name": "raid_bdev1", 00:15:37.571 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:37.571 "strip_size_kb": 64, 00:15:37.571 "state": "configuring", 00:15:37.571 "raid_level": "raid5f", 00:15:37.571 
"superblock": true, 00:15:37.571 "num_base_bdevs": 3, 00:15:37.571 "num_base_bdevs_discovered": 1, 00:15:37.571 "num_base_bdevs_operational": 2, 00:15:37.571 "base_bdevs_list": [ 00:15:37.571 { 00:15:37.571 "name": null, 00:15:37.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.571 "is_configured": false, 00:15:37.571 "data_offset": 2048, 00:15:37.571 "data_size": 63488 00:15:37.571 }, 00:15:37.571 { 00:15:37.571 "name": "pt2", 00:15:37.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.571 "is_configured": true, 00:15:37.571 "data_offset": 2048, 00:15:37.571 "data_size": 63488 00:15:37.571 }, 00:15:37.571 { 00:15:37.571 "name": null, 00:15:37.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.571 "is_configured": false, 00:15:37.571 "data_offset": 2048, 00:15:37.571 "data_size": 63488 00:15:37.571 } 00:15:37.571 ] 00:15:37.571 }' 00:15:37.571 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.571 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.140 [2024-11-08 16:57:07.502864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:38.140 [2024-11-08 16:57:07.503018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.140 [2024-11-08 16:57:07.503046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:38.140 [2024-11-08 16:57:07.503061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.140 [2024-11-08 16:57:07.503593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.140 [2024-11-08 16:57:07.503622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.140 [2024-11-08 16:57:07.503728] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:38.140 [2024-11-08 16:57:07.503760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:38.140 [2024-11-08 16:57:07.503864] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:38.140 [2024-11-08 16:57:07.503878] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:38.140 [2024-11-08 16:57:07.504175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:38.140 [2024-11-08 16:57:07.504841] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:38.140 [2024-11-08 16:57:07.504883] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:38.140 [2024-11-08 16:57:07.505094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.140 pt3 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.140 "name": "raid_bdev1", 00:15:38.140 "uuid": "85ebf1cd-7439-43a5-ae94-cecdf3dca970", 00:15:38.140 "strip_size_kb": 64, 00:15:38.140 "state": "online", 00:15:38.140 "raid_level": 
"raid5f", 00:15:38.140 "superblock": true, 00:15:38.140 "num_base_bdevs": 3, 00:15:38.140 "num_base_bdevs_discovered": 2, 00:15:38.140 "num_base_bdevs_operational": 2, 00:15:38.140 "base_bdevs_list": [ 00:15:38.140 { 00:15:38.140 "name": null, 00:15:38.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.140 "is_configured": false, 00:15:38.140 "data_offset": 2048, 00:15:38.140 "data_size": 63488 00:15:38.140 }, 00:15:38.140 { 00:15:38.140 "name": "pt2", 00:15:38.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.140 "is_configured": true, 00:15:38.140 "data_offset": 2048, 00:15:38.140 "data_size": 63488 00:15:38.140 }, 00:15:38.140 { 00:15:38.140 "name": "pt3", 00:15:38.140 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.140 "is_configured": true, 00:15:38.140 "data_offset": 2048, 00:15:38.140 "data_size": 63488 00:15:38.140 } 00:15:38.140 ] 00:15:38.140 }' 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.140 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.707 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:38.707 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:38.707 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.707 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.707 16:57:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.707 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:38.707 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:38.707 16:57:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.707 [2024-11-08 16:57:08.006354] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 85ebf1cd-7439-43a5-ae94-cecdf3dca970 '!=' 85ebf1cd-7439-43a5-ae94-cecdf3dca970 ']' 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91749 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91749 ']' 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91749 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91749 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91749' 00:15:38.707 killing process with pid 91749 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91749 00:15:38.707 [2024-11-08 16:57:08.079856] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.707 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91749 
00:15:38.707 [2024-11-08 16:57:08.080046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.707 [2024-11-08 16:57:08.080167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.707 [2024-11-08 16:57:08.080216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:38.707 [2024-11-08 16:57:08.115793] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.966 ************************************ 00:15:38.966 END TEST raid5f_superblock_test 00:15:38.966 ************************************ 00:15:38.966 16:57:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:38.966 00:15:38.966 real 0m6.841s 00:15:38.966 user 0m11.497s 00:15:38.966 sys 0m1.437s 00:15:38.966 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.966 16:57:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.966 16:57:08 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:38.966 16:57:08 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:38.966 16:57:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:38.966 16:57:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.966 16:57:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.966 ************************************ 00:15:38.966 START TEST raid5f_rebuild_test 00:15:38.966 ************************************ 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:38.966 16:57:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92180 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92180 00:15:38.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92180 ']' 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:38.966 16:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.225 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:15:39.225 Zero copy mechanism will not be used. 00:15:39.225 [2024-11-08 16:57:08.524228] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:39.225 [2024-11-08 16:57:08.524390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92180 ] 00:15:39.225 [2024-11-08 16:57:08.691876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.225 [2024-11-08 16:57:08.747577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.484 [2024-11-08 16:57:08.792954] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.484 [2024-11-08 16:57:08.792994] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.052 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.052 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:40.052 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:40.052 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:40.052 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.052 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.052 BaseBdev1_malloc 00:15:40.052 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.052 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:40.052 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.052 16:57:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.052 [2024-11-08 16:57:09.488721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:40.052 [2024-11-08 16:57:09.488874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.052 [2024-11-08 16:57:09.488913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:40.052 [2024-11-08 16:57:09.488933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.052 [2024-11-08 16:57:09.491559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.053 [2024-11-08 16:57:09.491605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:40.053 BaseBdev1 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.053 BaseBdev2_malloc 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.053 [2024-11-08 16:57:09.532170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:15:40.053 [2024-11-08 16:57:09.532267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.053 [2024-11-08 16:57:09.532303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:40.053 [2024-11-08 16:57:09.532320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.053 [2024-11-08 16:57:09.536012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.053 [2024-11-08 16:57:09.536061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:40.053 BaseBdev2 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.053 BaseBdev3_malloc 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.053 [2024-11-08 16:57:09.561812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:40.053 [2024-11-08 16:57:09.561943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.053 [2024-11-08 16:57:09.561984] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:15:40.053 [2024-11-08 16:57:09.561995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.053 [2024-11-08 16:57:09.564871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.053 [2024-11-08 16:57:09.564933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:40.053 BaseBdev3 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.053 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.312 spare_malloc 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.312 spare_delay 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.312 [2024-11-08 16:57:09.603328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:40.312 [2024-11-08 16:57:09.603480] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.312 [2024-11-08 16:57:09.603520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:40.312 [2024-11-08 16:57:09.603532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.312 [2024-11-08 16:57:09.606141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.312 [2024-11-08 16:57:09.606183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:40.312 spare 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.312 [2024-11-08 16:57:09.615392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.312 [2024-11-08 16:57:09.617561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.312 [2024-11-08 16:57:09.617726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.312 [2024-11-08 16:57:09.617837] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:40.312 [2024-11-08 16:57:09.617851] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:40.312 [2024-11-08 16:57:09.618174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:40.312 [2024-11-08 16:57:09.618630] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:40.312 [2024-11-08 16:57:09.618643] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:40.312 [2024-11-08 16:57:09.618823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.312 16:57:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.312 "name": "raid_bdev1", 00:15:40.312 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:40.312 "strip_size_kb": 64, 00:15:40.312 "state": "online", 00:15:40.312 "raid_level": "raid5f", 00:15:40.312 "superblock": false, 00:15:40.312 "num_base_bdevs": 3, 00:15:40.312 "num_base_bdevs_discovered": 3, 00:15:40.312 "num_base_bdevs_operational": 3, 00:15:40.312 "base_bdevs_list": [ 00:15:40.312 { 00:15:40.312 "name": "BaseBdev1", 00:15:40.312 "uuid": "3a473b3a-9db9-57d9-b511-3c5d83d9a8ab", 00:15:40.312 "is_configured": true, 00:15:40.312 "data_offset": 0, 00:15:40.312 "data_size": 65536 00:15:40.312 }, 00:15:40.312 { 00:15:40.312 "name": "BaseBdev2", 00:15:40.312 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:40.312 "is_configured": true, 00:15:40.312 "data_offset": 0, 00:15:40.312 "data_size": 65536 00:15:40.312 }, 00:15:40.312 { 00:15:40.312 "name": "BaseBdev3", 00:15:40.312 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:40.312 "is_configured": true, 00:15:40.312 "data_offset": 0, 00:15:40.312 "data_size": 65536 00:15:40.312 } 00:15:40.312 ] 00:15:40.312 }' 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.312 16:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.570 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:40.570 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.570 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:40.570 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.570 [2024-11-08 16:57:10.063820] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.570 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:40.570 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:40.570 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:40.570 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.570 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.570 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:15:40.828 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:41.087 [2024-11-08 16:57:10.387381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:41.087 /dev/nbd0 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.087 1+0 records in 00:15:41.087 1+0 records out 00:15:41.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435261 s, 9.4 MB/s 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:41.087 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:41.346 512+0 records in 00:15:41.346 512+0 records out 00:15:41.346 67108864 bytes (67 MB, 64 MiB) copied, 0.363126 s, 185 MB/s 00:15:41.346 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:41.346 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.346 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:41.346 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.346 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:41.346 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.346 16:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.603 
[2024-11-08 16:57:11.079177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.603 [2024-11-08 16:57:11.087397] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.603 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.604 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.604 16:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.604 16:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.604 16:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.604 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.604 "name": "raid_bdev1", 00:15:41.604 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:41.604 "strip_size_kb": 64, 00:15:41.604 "state": "online", 00:15:41.604 "raid_level": "raid5f", 00:15:41.604 "superblock": false, 00:15:41.604 "num_base_bdevs": 3, 00:15:41.604 "num_base_bdevs_discovered": 2, 00:15:41.604 "num_base_bdevs_operational": 2, 00:15:41.604 "base_bdevs_list": [ 00:15:41.604 { 00:15:41.604 "name": null, 00:15:41.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.604 "is_configured": false, 00:15:41.604 "data_offset": 0, 00:15:41.604 "data_size": 65536 00:15:41.604 }, 00:15:41.604 { 00:15:41.604 "name": "BaseBdev2", 00:15:41.604 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:41.604 "is_configured": true, 00:15:41.604 "data_offset": 0, 00:15:41.604 "data_size": 65536 00:15:41.604 }, 00:15:41.604 { 00:15:41.604 "name": "BaseBdev3", 00:15:41.604 "uuid": 
"3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:41.604 "is_configured": true, 00:15:41.604 "data_offset": 0, 00:15:41.604 "data_size": 65536 00:15:41.604 } 00:15:41.604 ] 00:15:41.604 }' 00:15:41.604 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.604 16:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.169 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.169 16:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.169 16:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.169 [2024-11-08 16:57:11.570481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.169 [2024-11-08 16:57:11.574799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:15:42.169 16:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.169 16:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:42.169 [2024-11-08 16:57:11.577456] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.104 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.104 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.104 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.104 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.104 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.104 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.104 16:57:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.104 16:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.104 16:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.104 16:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.104 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.104 "name": "raid_bdev1", 00:15:43.104 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:43.104 "strip_size_kb": 64, 00:15:43.104 "state": "online", 00:15:43.104 "raid_level": "raid5f", 00:15:43.104 "superblock": false, 00:15:43.104 "num_base_bdevs": 3, 00:15:43.104 "num_base_bdevs_discovered": 3, 00:15:43.104 "num_base_bdevs_operational": 3, 00:15:43.104 "process": { 00:15:43.104 "type": "rebuild", 00:15:43.104 "target": "spare", 00:15:43.104 "progress": { 00:15:43.104 "blocks": 20480, 00:15:43.104 "percent": 15 00:15:43.104 } 00:15:43.104 }, 00:15:43.104 "base_bdevs_list": [ 00:15:43.104 { 00:15:43.104 "name": "spare", 00:15:43.104 "uuid": "0c3bbcdd-c580-524b-86cc-d618d39ab211", 00:15:43.104 "is_configured": true, 00:15:43.104 "data_offset": 0, 00:15:43.104 "data_size": 65536 00:15:43.104 }, 00:15:43.104 { 00:15:43.104 "name": "BaseBdev2", 00:15:43.104 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:43.104 "is_configured": true, 00:15:43.104 "data_offset": 0, 00:15:43.104 "data_size": 65536 00:15:43.104 }, 00:15:43.104 { 00:15:43.104 "name": "BaseBdev3", 00:15:43.104 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:43.104 "is_configured": true, 00:15:43.104 "data_offset": 0, 00:15:43.104 "data_size": 65536 00:15:43.104 } 00:15:43.104 ] 00:15:43.104 }' 00:15:43.104 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.363 [2024-11-08 16:57:12.734306] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.363 [2024-11-08 16:57:12.790005] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:43.363 [2024-11-08 16:57:12.790140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.363 [2024-11-08 16:57:12.790164] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.363 [2024-11-08 16:57:12.790179] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.363 "name": "raid_bdev1", 00:15:43.363 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:43.363 "strip_size_kb": 64, 00:15:43.363 "state": "online", 00:15:43.363 "raid_level": "raid5f", 00:15:43.363 "superblock": false, 00:15:43.363 "num_base_bdevs": 3, 00:15:43.363 "num_base_bdevs_discovered": 2, 00:15:43.363 "num_base_bdevs_operational": 2, 00:15:43.363 "base_bdevs_list": [ 00:15:43.363 { 00:15:43.363 "name": null, 00:15:43.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.363 "is_configured": false, 00:15:43.363 "data_offset": 0, 00:15:43.363 "data_size": 65536 00:15:43.363 }, 00:15:43.363 { 00:15:43.363 "name": "BaseBdev2", 00:15:43.363 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:43.363 "is_configured": true, 00:15:43.363 "data_offset": 0, 00:15:43.363 "data_size": 65536 00:15:43.363 }, 00:15:43.363 { 00:15:43.363 "name": "BaseBdev3", 00:15:43.363 "uuid": 
"3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:43.363 "is_configured": true, 00:15:43.363 "data_offset": 0, 00:15:43.363 "data_size": 65536 00:15:43.363 } 00:15:43.363 ] 00:15:43.363 }' 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.363 16:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.952 "name": "raid_bdev1", 00:15:43.952 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:43.952 "strip_size_kb": 64, 00:15:43.952 "state": "online", 00:15:43.952 "raid_level": "raid5f", 00:15:43.952 "superblock": false, 00:15:43.952 "num_base_bdevs": 3, 00:15:43.952 "num_base_bdevs_discovered": 2, 00:15:43.952 "num_base_bdevs_operational": 2, 00:15:43.952 "base_bdevs_list": [ 00:15:43.952 { 00:15:43.952 
"name": null, 00:15:43.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.952 "is_configured": false, 00:15:43.952 "data_offset": 0, 00:15:43.952 "data_size": 65536 00:15:43.952 }, 00:15:43.952 { 00:15:43.952 "name": "BaseBdev2", 00:15:43.952 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:43.952 "is_configured": true, 00:15:43.952 "data_offset": 0, 00:15:43.952 "data_size": 65536 00:15:43.952 }, 00:15:43.952 { 00:15:43.952 "name": "BaseBdev3", 00:15:43.952 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:43.952 "is_configured": true, 00:15:43.952 "data_offset": 0, 00:15:43.952 "data_size": 65536 00:15:43.952 } 00:15:43.952 ] 00:15:43.952 }' 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.952 [2024-11-08 16:57:13.367721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.952 [2024-11-08 16:57:13.371772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.952 16:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:43.952 [2024-11-08 16:57:13.374330] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:15:44.888 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.888 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.888 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.888 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.888 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.888 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.888 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.888 16:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.888 16:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.888 16:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.147 "name": "raid_bdev1", 00:15:45.147 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:45.147 "strip_size_kb": 64, 00:15:45.147 "state": "online", 00:15:45.147 "raid_level": "raid5f", 00:15:45.147 "superblock": false, 00:15:45.147 "num_base_bdevs": 3, 00:15:45.147 "num_base_bdevs_discovered": 3, 00:15:45.147 "num_base_bdevs_operational": 3, 00:15:45.147 "process": { 00:15:45.147 "type": "rebuild", 00:15:45.147 "target": "spare", 00:15:45.147 "progress": { 00:15:45.147 "blocks": 20480, 00:15:45.147 "percent": 15 00:15:45.147 } 00:15:45.147 }, 00:15:45.147 "base_bdevs_list": [ 00:15:45.147 { 00:15:45.147 "name": "spare", 00:15:45.147 "uuid": "0c3bbcdd-c580-524b-86cc-d618d39ab211", 00:15:45.147 "is_configured": true, 00:15:45.147 "data_offset": 0, 
00:15:45.147 "data_size": 65536 00:15:45.147 }, 00:15:45.147 { 00:15:45.147 "name": "BaseBdev2", 00:15:45.147 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:45.147 "is_configured": true, 00:15:45.147 "data_offset": 0, 00:15:45.147 "data_size": 65536 00:15:45.147 }, 00:15:45.147 { 00:15:45.147 "name": "BaseBdev3", 00:15:45.147 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:45.147 "is_configured": true, 00:15:45.147 "data_offset": 0, 00:15:45.147 "data_size": 65536 00:15:45.147 } 00:15:45.147 ] 00:15:45.147 }' 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=459 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.147 16:57:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.147 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.147 "name": "raid_bdev1", 00:15:45.148 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:45.148 "strip_size_kb": 64, 00:15:45.148 "state": "online", 00:15:45.148 "raid_level": "raid5f", 00:15:45.148 "superblock": false, 00:15:45.148 "num_base_bdevs": 3, 00:15:45.148 "num_base_bdevs_discovered": 3, 00:15:45.148 "num_base_bdevs_operational": 3, 00:15:45.148 "process": { 00:15:45.148 "type": "rebuild", 00:15:45.148 "target": "spare", 00:15:45.148 "progress": { 00:15:45.148 "blocks": 22528, 00:15:45.148 "percent": 17 00:15:45.148 } 00:15:45.148 }, 00:15:45.148 "base_bdevs_list": [ 00:15:45.148 { 00:15:45.148 "name": "spare", 00:15:45.148 "uuid": "0c3bbcdd-c580-524b-86cc-d618d39ab211", 00:15:45.148 "is_configured": true, 00:15:45.148 "data_offset": 0, 00:15:45.148 "data_size": 65536 00:15:45.148 }, 00:15:45.148 { 00:15:45.148 "name": "BaseBdev2", 00:15:45.148 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:45.148 "is_configured": true, 00:15:45.148 "data_offset": 0, 00:15:45.148 "data_size": 65536 00:15:45.148 }, 00:15:45.148 { 00:15:45.148 "name": "BaseBdev3", 00:15:45.148 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:45.148 "is_configured": true, 00:15:45.148 "data_offset": 0, 00:15:45.148 "data_size": 65536 00:15:45.148 } 
00:15:45.148 ] 00:15:45.148 }' 00:15:45.148 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.148 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.148 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.148 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.148 16:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.526 "name": "raid_bdev1", 00:15:46.526 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:46.526 
"strip_size_kb": 64, 00:15:46.526 "state": "online", 00:15:46.526 "raid_level": "raid5f", 00:15:46.526 "superblock": false, 00:15:46.526 "num_base_bdevs": 3, 00:15:46.526 "num_base_bdevs_discovered": 3, 00:15:46.526 "num_base_bdevs_operational": 3, 00:15:46.526 "process": { 00:15:46.526 "type": "rebuild", 00:15:46.526 "target": "spare", 00:15:46.526 "progress": { 00:15:46.526 "blocks": 45056, 00:15:46.526 "percent": 34 00:15:46.526 } 00:15:46.526 }, 00:15:46.526 "base_bdevs_list": [ 00:15:46.526 { 00:15:46.526 "name": "spare", 00:15:46.526 "uuid": "0c3bbcdd-c580-524b-86cc-d618d39ab211", 00:15:46.526 "is_configured": true, 00:15:46.526 "data_offset": 0, 00:15:46.526 "data_size": 65536 00:15:46.526 }, 00:15:46.526 { 00:15:46.526 "name": "BaseBdev2", 00:15:46.526 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:46.526 "is_configured": true, 00:15:46.526 "data_offset": 0, 00:15:46.526 "data_size": 65536 00:15:46.526 }, 00:15:46.526 { 00:15:46.526 "name": "BaseBdev3", 00:15:46.526 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:46.526 "is_configured": true, 00:15:46.526 "data_offset": 0, 00:15:46.526 "data_size": 65536 00:15:46.526 } 00:15:46.526 ] 00:15:46.526 }' 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.526 16:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.466 16:57:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.466 "name": "raid_bdev1", 00:15:47.466 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:47.466 "strip_size_kb": 64, 00:15:47.466 "state": "online", 00:15:47.466 "raid_level": "raid5f", 00:15:47.466 "superblock": false, 00:15:47.466 "num_base_bdevs": 3, 00:15:47.466 "num_base_bdevs_discovered": 3, 00:15:47.466 "num_base_bdevs_operational": 3, 00:15:47.466 "process": { 00:15:47.466 "type": "rebuild", 00:15:47.466 "target": "spare", 00:15:47.466 "progress": { 00:15:47.466 "blocks": 67584, 00:15:47.466 "percent": 51 00:15:47.466 } 00:15:47.466 }, 00:15:47.466 "base_bdevs_list": [ 00:15:47.466 { 00:15:47.466 "name": "spare", 00:15:47.466 "uuid": "0c3bbcdd-c580-524b-86cc-d618d39ab211", 00:15:47.466 "is_configured": true, 00:15:47.466 "data_offset": 0, 00:15:47.466 "data_size": 65536 00:15:47.466 }, 00:15:47.466 { 00:15:47.466 "name": "BaseBdev2", 00:15:47.466 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:47.466 
"is_configured": true, 00:15:47.466 "data_offset": 0, 00:15:47.466 "data_size": 65536 00:15:47.466 }, 00:15:47.466 { 00:15:47.466 "name": "BaseBdev3", 00:15:47.466 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:47.466 "is_configured": true, 00:15:47.466 "data_offset": 0, 00:15:47.466 "data_size": 65536 00:15:47.466 } 00:15:47.466 ] 00:15:47.466 }' 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.466 16:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.408 16:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.408 16:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.408 16:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.408 16:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.408 16:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.408 16:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.408 16:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.408 16:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.667 16:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.667 16:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:48.667 16:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.667 16:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.667 "name": "raid_bdev1", 00:15:48.667 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:48.667 "strip_size_kb": 64, 00:15:48.667 "state": "online", 00:15:48.667 "raid_level": "raid5f", 00:15:48.667 "superblock": false, 00:15:48.667 "num_base_bdevs": 3, 00:15:48.667 "num_base_bdevs_discovered": 3, 00:15:48.667 "num_base_bdevs_operational": 3, 00:15:48.667 "process": { 00:15:48.667 "type": "rebuild", 00:15:48.667 "target": "spare", 00:15:48.667 "progress": { 00:15:48.667 "blocks": 92160, 00:15:48.667 "percent": 70 00:15:48.667 } 00:15:48.667 }, 00:15:48.667 "base_bdevs_list": [ 00:15:48.667 { 00:15:48.667 "name": "spare", 00:15:48.667 "uuid": "0c3bbcdd-c580-524b-86cc-d618d39ab211", 00:15:48.667 "is_configured": true, 00:15:48.667 "data_offset": 0, 00:15:48.667 "data_size": 65536 00:15:48.667 }, 00:15:48.667 { 00:15:48.667 "name": "BaseBdev2", 00:15:48.667 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:48.667 "is_configured": true, 00:15:48.667 "data_offset": 0, 00:15:48.667 "data_size": 65536 00:15:48.667 }, 00:15:48.667 { 00:15:48.667 "name": "BaseBdev3", 00:15:48.667 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:48.667 "is_configured": true, 00:15:48.667 "data_offset": 0, 00:15:48.667 "data_size": 65536 00:15:48.667 } 00:15:48.667 ] 00:15:48.667 }' 00:15:48.667 16:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.667 16:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.667 16:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.667 16:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.667 16:57:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.601 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.601 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.601 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.601 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.601 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.601 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.601 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.601 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.601 16:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.601 16:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.602 16:57:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.860 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.860 "name": "raid_bdev1", 00:15:49.860 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:49.860 "strip_size_kb": 64, 00:15:49.860 "state": "online", 00:15:49.860 "raid_level": "raid5f", 00:15:49.860 "superblock": false, 00:15:49.860 "num_base_bdevs": 3, 00:15:49.860 "num_base_bdevs_discovered": 3, 00:15:49.860 "num_base_bdevs_operational": 3, 00:15:49.860 "process": { 00:15:49.860 "type": "rebuild", 00:15:49.860 "target": "spare", 00:15:49.860 "progress": { 00:15:49.860 "blocks": 114688, 00:15:49.860 "percent": 87 00:15:49.860 } 00:15:49.860 }, 00:15:49.860 "base_bdevs_list": [ 00:15:49.860 { 
00:15:49.860 "name": "spare", 00:15:49.860 "uuid": "0c3bbcdd-c580-524b-86cc-d618d39ab211", 00:15:49.860 "is_configured": true, 00:15:49.860 "data_offset": 0, 00:15:49.860 "data_size": 65536 00:15:49.860 }, 00:15:49.860 { 00:15:49.860 "name": "BaseBdev2", 00:15:49.860 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:49.860 "is_configured": true, 00:15:49.860 "data_offset": 0, 00:15:49.860 "data_size": 65536 00:15:49.860 }, 00:15:49.860 { 00:15:49.860 "name": "BaseBdev3", 00:15:49.860 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:49.860 "is_configured": true, 00:15:49.860 "data_offset": 0, 00:15:49.860 "data_size": 65536 00:15:49.860 } 00:15:49.860 ] 00:15:49.860 }' 00:15:49.860 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.860 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.860 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.860 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.860 16:57:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:50.428 [2024-11-08 16:57:19.838155] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:50.428 [2024-11-08 16:57:19.838261] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:50.428 [2024-11-08 16:57:19.838316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.994 16:57:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.994 "name": "raid_bdev1", 00:15:50.994 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:50.994 "strip_size_kb": 64, 00:15:50.994 "state": "online", 00:15:50.994 "raid_level": "raid5f", 00:15:50.994 "superblock": false, 00:15:50.994 "num_base_bdevs": 3, 00:15:50.994 "num_base_bdevs_discovered": 3, 00:15:50.994 "num_base_bdevs_operational": 3, 00:15:50.994 "base_bdevs_list": [ 00:15:50.994 { 00:15:50.994 "name": "spare", 00:15:50.994 "uuid": "0c3bbcdd-c580-524b-86cc-d618d39ab211", 00:15:50.994 "is_configured": true, 00:15:50.994 "data_offset": 0, 00:15:50.994 "data_size": 65536 00:15:50.994 }, 00:15:50.994 { 00:15:50.994 "name": "BaseBdev2", 00:15:50.994 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:50.994 "is_configured": true, 00:15:50.994 "data_offset": 0, 00:15:50.994 "data_size": 65536 00:15:50.994 }, 00:15:50.994 { 00:15:50.994 "name": "BaseBdev3", 00:15:50.994 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:50.994 "is_configured": true, 00:15:50.994 "data_offset": 0, 00:15:50.994 "data_size": 65536 00:15:50.994 } 
00:15:50.994 ] 00:15:50.994 }' 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.994 "name": "raid_bdev1", 00:15:50.994 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:50.994 "strip_size_kb": 64, 00:15:50.994 "state": "online", 00:15:50.994 "raid_level": "raid5f", 00:15:50.994 "superblock": false, 
00:15:50.994 "num_base_bdevs": 3, 00:15:50.994 "num_base_bdevs_discovered": 3, 00:15:50.994 "num_base_bdevs_operational": 3, 00:15:50.994 "base_bdevs_list": [ 00:15:50.994 { 00:15:50.994 "name": "spare", 00:15:50.994 "uuid": "0c3bbcdd-c580-524b-86cc-d618d39ab211", 00:15:50.994 "is_configured": true, 00:15:50.994 "data_offset": 0, 00:15:50.994 "data_size": 65536 00:15:50.994 }, 00:15:50.994 { 00:15:50.994 "name": "BaseBdev2", 00:15:50.994 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:50.994 "is_configured": true, 00:15:50.994 "data_offset": 0, 00:15:50.994 "data_size": 65536 00:15:50.994 }, 00:15:50.994 { 00:15:50.994 "name": "BaseBdev3", 00:15:50.994 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 00:15:50.994 "is_configured": true, 00:15:50.994 "data_offset": 0, 00:15:50.994 "data_size": 65536 00:15:50.994 } 00:15:50.994 ] 00:15:50.994 }' 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.994 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.253 
16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.253 "name": "raid_bdev1", 00:15:51.253 "uuid": "507f75cd-136e-472c-8800-b49bb22ea835", 00:15:51.253 "strip_size_kb": 64, 00:15:51.253 "state": "online", 00:15:51.253 "raid_level": "raid5f", 00:15:51.253 "superblock": false, 00:15:51.253 "num_base_bdevs": 3, 00:15:51.253 "num_base_bdevs_discovered": 3, 00:15:51.253 "num_base_bdevs_operational": 3, 00:15:51.253 "base_bdevs_list": [ 00:15:51.253 { 00:15:51.253 "name": "spare", 00:15:51.253 "uuid": "0c3bbcdd-c580-524b-86cc-d618d39ab211", 00:15:51.253 "is_configured": true, 00:15:51.253 "data_offset": 0, 00:15:51.253 "data_size": 65536 00:15:51.253 }, 00:15:51.253 { 00:15:51.253 "name": "BaseBdev2", 00:15:51.253 "uuid": "73ad5bad-6864-5041-a1a2-75cdc7512534", 00:15:51.253 "is_configured": true, 00:15:51.253 "data_offset": 0, 00:15:51.253 "data_size": 65536 00:15:51.253 }, 00:15:51.253 { 00:15:51.253 "name": "BaseBdev3", 00:15:51.253 "uuid": "3b0f4203-7457-51ea-92f1-716d08634e94", 
00:15:51.253 "is_configured": true, 00:15:51.253 "data_offset": 0, 00:15:51.253 "data_size": 65536 00:15:51.253 } 00:15:51.253 ] 00:15:51.253 }' 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.253 16:57:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.511 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.511 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.511 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.511 [2024-11-08 16:57:21.022194] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.511 [2024-11-08 16:57:21.022246] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.511 [2024-11-08 16:57:21.022397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.511 [2024-11-08 16:57:21.022512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.511 [2024-11-08 16:57:21.022529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:51.511 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.511 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.511 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:51.511 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.511 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:51.770 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:52.028 /dev/nbd0 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.028 1+0 records in 00:15:52.028 1+0 records out 00:15:52.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438442 s, 9.3 MB/s 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:52.028 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:52.296 /dev/nbd1 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:52.296 16:57:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.296 1+0 records in 00:15:52.296 1+0 records out 00:15:52.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550585 s, 7.4 MB/s 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.296 16:57:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:52.554 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.555 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.555 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.555 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.555 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.555 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.555 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:52.555 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.555 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.555 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92180 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92180 ']' 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92180 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.813 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92180 00:15:53.070 killing process with pid 92180 00:15:53.070 Received shutdown signal, test time was about 60.000000 seconds 00:15:53.070 00:15:53.070 Latency(us) 00:15:53.070 [2024-11-08T16:57:22.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.070 [2024-11-08T16:57:22.598Z] =================================================================================================================== 00:15:53.070 [2024-11-08T16:57:22.599Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:15:53.071 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.071 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.071 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92180' 00:15:53.071 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92180 00:15:53.071 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92180 00:15:53.071 [2024-11-08 16:57:22.351795] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.071 [2024-11-08 16:57:22.396234] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.328 16:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:53.328 00:15:53.328 real 0m14.219s 00:15:53.328 user 0m18.024s 00:15:53.328 sys 0m2.062s 00:15:53.328 ************************************ 00:15:53.328 END TEST raid5f_rebuild_test 00:15:53.328 ************************************ 00:15:53.328 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.329 16:57:22 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:53.329 16:57:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:53.329 16:57:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.329 16:57:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.329 ************************************ 00:15:53.329 START TEST raid5f_rebuild_test_sb 00:15:53.329 ************************************ 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 
00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92605 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92605 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92605 ']' 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.329 16:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:53.329 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:53.329 Zero copy mechanism will not be used. 00:15:53.329 [2024-11-08 16:57:22.808566] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:53.329 [2024-11-08 16:57:22.808751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92605 ] 00:15:53.586 [2024-11-08 16:57:22.961243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.586 [2024-11-08 16:57:23.038426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.586 [2024-11-08 16:57:23.087390] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.586 [2024-11-08 16:57:23.087441] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.527 
16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.527 BaseBdev1_malloc 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.527 [2024-11-08 16:57:23.780481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:54.527 [2024-11-08 16:57:23.780598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.527 [2024-11-08 16:57:23.780639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:54.527 [2024-11-08 16:57:23.780670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.527 [2024-11-08 16:57:23.783328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.527 [2024-11-08 16:57:23.783384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.527 BaseBdev1 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.527 BaseBdev2_malloc 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.527 [2024-11-08 16:57:23.827095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:54.527 [2024-11-08 16:57:23.827181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.527 [2024-11-08 16:57:23.827222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:54.527 [2024-11-08 16:57:23.827236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.527 [2024-11-08 16:57:23.830135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.527 [2024-11-08 16:57:23.830186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:54.527 BaseBdev2 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.527 BaseBdev3_malloc 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.527 [2024-11-08 16:57:23.856716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:54.527 [2024-11-08 16:57:23.856788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.527 [2024-11-08 16:57:23.856820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:54.527 [2024-11-08 16:57:23.856830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.527 [2024-11-08 16:57:23.859349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.527 [2024-11-08 16:57:23.859396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.527 BaseBdev3 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.527 spare_malloc 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.527 spare_delay 00:15:54.527 
16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.527 [2024-11-08 16:57:23.898273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:54.527 [2024-11-08 16:57:23.898359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.527 [2024-11-08 16:57:23.898395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:54.527 [2024-11-08 16:57:23.898407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.527 [2024-11-08 16:57:23.901054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.527 [2024-11-08 16:57:23.901103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:54.527 spare 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.527 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.527 [2024-11-08 16:57:23.910370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.527 [2024-11-08 16:57:23.912663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.527 [2024-11-08 16:57:23.912749] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.527 [2024-11-08 16:57:23.912966] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:54.528 [2024-11-08 16:57:23.912997] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:54.528 [2024-11-08 16:57:23.913351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:54.528 [2024-11-08 16:57:23.913873] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:54.528 [2024-11-08 16:57:23.913895] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:54.528 [2024-11-08 16:57:23.914089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.528 "name": "raid_bdev1", 00:15:54.528 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:15:54.528 "strip_size_kb": 64, 00:15:54.528 "state": "online", 00:15:54.528 "raid_level": "raid5f", 00:15:54.528 "superblock": true, 00:15:54.528 "num_base_bdevs": 3, 00:15:54.528 "num_base_bdevs_discovered": 3, 00:15:54.528 "num_base_bdevs_operational": 3, 00:15:54.528 "base_bdevs_list": [ 00:15:54.528 { 00:15:54.528 "name": "BaseBdev1", 00:15:54.528 "uuid": "c444afb9-7b5c-5ab2-b662-5bf7b44a644b", 00:15:54.528 "is_configured": true, 00:15:54.528 "data_offset": 2048, 00:15:54.528 "data_size": 63488 00:15:54.528 }, 00:15:54.528 { 00:15:54.528 "name": "BaseBdev2", 00:15:54.528 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:15:54.528 "is_configured": true, 00:15:54.528 "data_offset": 2048, 00:15:54.528 "data_size": 63488 00:15:54.528 }, 00:15:54.528 { 00:15:54.528 "name": "BaseBdev3", 00:15:54.528 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:15:54.528 "is_configured": true, 00:15:54.528 "data_offset": 2048, 00:15:54.528 "data_size": 63488 00:15:54.528 } 00:15:54.528 ] 00:15:54.528 }' 00:15:54.528 16:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.528 16:57:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.097 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.097 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.097 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.097 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:55.097 [2024-11-08 16:57:24.369797] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.097 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.097 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:55.097 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.097 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.097 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:55.098 16:57:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.098 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:55.358 [2024-11-08 16:57:24.701090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:55.358 /dev/nbd0 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.358 1+0 records in 00:15:55.358 1+0 records out 00:15:55.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506779 s, 8.1 MB/s 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:55.358 16:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:55.928 496+0 records in 00:15:55.928 496+0 records out 00:15:55.928 65011712 bytes (65 MB, 62 MiB) copied, 0.419437 s, 155 MB/s 00:15:55.928 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:55.928 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.928 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:55.928 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:55.928 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:55.928 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.928 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:56.188 [2024-11-08 16:57:25.472415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.188 [2024-11-08 16:57:25.492537] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.188 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.188 "name": "raid_bdev1", 00:15:56.188 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:15:56.188 "strip_size_kb": 64, 00:15:56.188 "state": "online", 00:15:56.189 "raid_level": "raid5f", 00:15:56.189 "superblock": true, 00:15:56.189 "num_base_bdevs": 3, 00:15:56.189 "num_base_bdevs_discovered": 2, 00:15:56.189 "num_base_bdevs_operational": 2, 00:15:56.189 "base_bdevs_list": [ 00:15:56.189 { 00:15:56.189 "name": null, 00:15:56.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.189 "is_configured": false, 00:15:56.189 "data_offset": 0, 00:15:56.189 "data_size": 63488 00:15:56.189 }, 00:15:56.189 { 00:15:56.189 "name": "BaseBdev2", 00:15:56.189 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:15:56.189 "is_configured": true, 00:15:56.189 "data_offset": 2048, 00:15:56.189 "data_size": 63488 00:15:56.189 }, 00:15:56.189 { 00:15:56.189 "name": "BaseBdev3", 00:15:56.189 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:15:56.189 "is_configured": true, 00:15:56.189 "data_offset": 2048, 00:15:56.189 "data_size": 63488 00:15:56.189 } 00:15:56.189 ] 00:15:56.189 }' 00:15:56.189 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.189 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.448 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:56.448 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.448 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.448 [2024-11-08 16:57:25.939866] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:56.448 [2024-11-08 16:57:25.944228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:15:56.448 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.448 16:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:56.448 [2024-11-08 16:57:25.947079] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:57.830 16:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.830 16:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.830 16:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.830 16:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.830 16:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.830 16:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.830 16:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.830 16:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.830 16:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.830 16:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.830 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.830 "name": "raid_bdev1", 00:15:57.830 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:15:57.830 "strip_size_kb": 64, 00:15:57.830 "state": "online", 00:15:57.830 "raid_level": "raid5f", 00:15:57.830 "superblock": true, 00:15:57.830 "num_base_bdevs": 3, 00:15:57.830 "num_base_bdevs_discovered": 3, 00:15:57.830 "num_base_bdevs_operational": 3, 00:15:57.830 "process": { 00:15:57.830 "type": "rebuild", 00:15:57.830 "target": "spare", 00:15:57.830 "progress": { 
00:15:57.830 "blocks": 20480, 00:15:57.830 "percent": 16 00:15:57.830 } 00:15:57.830 }, 00:15:57.830 "base_bdevs_list": [ 00:15:57.830 { 00:15:57.830 "name": "spare", 00:15:57.830 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:15:57.830 "is_configured": true, 00:15:57.830 "data_offset": 2048, 00:15:57.830 "data_size": 63488 00:15:57.830 }, 00:15:57.830 { 00:15:57.830 "name": "BaseBdev2", 00:15:57.830 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:15:57.830 "is_configured": true, 00:15:57.830 "data_offset": 2048, 00:15:57.830 "data_size": 63488 00:15:57.830 }, 00:15:57.830 { 00:15:57.830 "name": "BaseBdev3", 00:15:57.830 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:15:57.830 "is_configured": true, 00:15:57.830 "data_offset": 2048, 00:15:57.830 "data_size": 63488 00:15:57.830 } 00:15:57.830 ] 00:15:57.830 }' 00:15:57.830 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.830 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.830 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.830 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.831 [2024-11-08 16:57:27.116007] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.831 [2024-11-08 16:57:27.159727] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:57.831 [2024-11-08 16:57:27.159866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:57.831 [2024-11-08 16:57:27.159891] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.831 [2024-11-08 16:57:27.159911] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.831 16:57:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.831 "name": "raid_bdev1", 00:15:57.831 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:15:57.831 "strip_size_kb": 64, 00:15:57.831 "state": "online", 00:15:57.831 "raid_level": "raid5f", 00:15:57.831 "superblock": true, 00:15:57.831 "num_base_bdevs": 3, 00:15:57.831 "num_base_bdevs_discovered": 2, 00:15:57.831 "num_base_bdevs_operational": 2, 00:15:57.831 "base_bdevs_list": [ 00:15:57.831 { 00:15:57.831 "name": null, 00:15:57.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.831 "is_configured": false, 00:15:57.831 "data_offset": 0, 00:15:57.831 "data_size": 63488 00:15:57.831 }, 00:15:57.831 { 00:15:57.831 "name": "BaseBdev2", 00:15:57.831 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:15:57.831 "is_configured": true, 00:15:57.831 "data_offset": 2048, 00:15:57.831 "data_size": 63488 00:15:57.831 }, 00:15:57.831 { 00:15:57.831 "name": "BaseBdev3", 00:15:57.831 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:15:57.831 "is_configured": true, 00:15:57.831 "data_offset": 2048, 00:15:57.831 "data_size": 63488 00:15:57.831 } 00:15:57.831 ] 00:15:57.831 }' 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.831 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.401 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:58.401 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.401 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:58.401 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:58.401 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.401 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.401 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.401 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.401 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.401 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.401 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.401 "name": "raid_bdev1", 00:15:58.401 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:15:58.401 "strip_size_kb": 64, 00:15:58.401 "state": "online", 00:15:58.401 "raid_level": "raid5f", 00:15:58.401 "superblock": true, 00:15:58.401 "num_base_bdevs": 3, 00:15:58.402 "num_base_bdevs_discovered": 2, 00:15:58.402 "num_base_bdevs_operational": 2, 00:15:58.402 "base_bdevs_list": [ 00:15:58.402 { 00:15:58.402 "name": null, 00:15:58.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.402 "is_configured": false, 00:15:58.402 "data_offset": 0, 00:15:58.402 "data_size": 63488 00:15:58.402 }, 00:15:58.402 { 00:15:58.402 "name": "BaseBdev2", 00:15:58.402 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:15:58.402 "is_configured": true, 00:15:58.402 "data_offset": 2048, 00:15:58.402 "data_size": 63488 00:15:58.402 }, 00:15:58.402 { 00:15:58.402 "name": "BaseBdev3", 00:15:58.402 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:15:58.402 "is_configured": true, 00:15:58.402 "data_offset": 2048, 00:15:58.402 "data_size": 63488 00:15:58.402 } 00:15:58.402 ] 00:15:58.402 }' 00:15:58.402 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.402 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:58.402 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.402 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.402 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:58.402 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.402 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.402 [2024-11-08 16:57:27.777245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:58.402 [2024-11-08 16:57:27.781376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:15:58.402 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.402 16:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:58.402 [2024-11-08 16:57:27.783997] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:59.371 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.372 "name": "raid_bdev1", 00:15:59.372 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:15:59.372 "strip_size_kb": 64, 00:15:59.372 "state": "online", 00:15:59.372 "raid_level": "raid5f", 00:15:59.372 "superblock": true, 00:15:59.372 "num_base_bdevs": 3, 00:15:59.372 "num_base_bdevs_discovered": 3, 00:15:59.372 "num_base_bdevs_operational": 3, 00:15:59.372 "process": { 00:15:59.372 "type": "rebuild", 00:15:59.372 "target": "spare", 00:15:59.372 "progress": { 00:15:59.372 "blocks": 20480, 00:15:59.372 "percent": 16 00:15:59.372 } 00:15:59.372 }, 00:15:59.372 "base_bdevs_list": [ 00:15:59.372 { 00:15:59.372 "name": "spare", 00:15:59.372 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:15:59.372 "is_configured": true, 00:15:59.372 "data_offset": 2048, 00:15:59.372 "data_size": 63488 00:15:59.372 }, 00:15:59.372 { 00:15:59.372 "name": "BaseBdev2", 00:15:59.372 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:15:59.372 "is_configured": true, 00:15:59.372 "data_offset": 2048, 00:15:59.372 "data_size": 63488 00:15:59.372 }, 00:15:59.372 { 00:15:59.372 "name": "BaseBdev3", 00:15:59.372 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:15:59.372 "is_configured": true, 00:15:59.372 "data_offset": 2048, 00:15:59.372 "data_size": 63488 00:15:59.372 } 00:15:59.372 ] 00:15:59.372 }' 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.372 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:59.631 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=473 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.631 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.632 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:59.632 16:57:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.632 16:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.632 "name": "raid_bdev1", 00:15:59.632 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:15:59.632 "strip_size_kb": 64, 00:15:59.632 "state": "online", 00:15:59.632 "raid_level": "raid5f", 00:15:59.632 "superblock": true, 00:15:59.632 "num_base_bdevs": 3, 00:15:59.632 "num_base_bdevs_discovered": 3, 00:15:59.632 "num_base_bdevs_operational": 3, 00:15:59.632 "process": { 00:15:59.632 "type": "rebuild", 00:15:59.632 "target": "spare", 00:15:59.632 "progress": { 00:15:59.632 "blocks": 22528, 00:15:59.632 "percent": 17 00:15:59.632 } 00:15:59.632 }, 00:15:59.632 "base_bdevs_list": [ 00:15:59.632 { 00:15:59.632 "name": "spare", 00:15:59.632 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:15:59.632 "is_configured": true, 00:15:59.632 "data_offset": 2048, 00:15:59.632 "data_size": 63488 00:15:59.632 }, 00:15:59.632 { 00:15:59.632 "name": "BaseBdev2", 00:15:59.632 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:15:59.632 "is_configured": true, 00:15:59.632 "data_offset": 2048, 00:15:59.632 "data_size": 63488 00:15:59.632 }, 00:15:59.632 { 00:15:59.632 "name": "BaseBdev3", 00:15:59.632 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:15:59.632 "is_configured": true, 00:15:59.632 "data_offset": 2048, 00:15:59.632 "data_size": 63488 00:15:59.632 } 00:15:59.632 ] 00:15:59.632 }' 00:15:59.632 16:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.632 16:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.632 16:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.632 16:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:59.632 16:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.012 "name": "raid_bdev1", 00:16:01.012 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:01.012 "strip_size_kb": 64, 00:16:01.012 "state": "online", 00:16:01.012 "raid_level": "raid5f", 00:16:01.012 "superblock": true, 00:16:01.012 "num_base_bdevs": 3, 00:16:01.012 "num_base_bdevs_discovered": 3, 00:16:01.012 "num_base_bdevs_operational": 3, 00:16:01.012 "process": { 00:16:01.012 "type": "rebuild", 00:16:01.012 "target": "spare", 00:16:01.012 "progress": { 00:16:01.012 "blocks": 47104, 00:16:01.012 "percent": 37 00:16:01.012 } 00:16:01.012 }, 
00:16:01.012 "base_bdevs_list": [ 00:16:01.012 { 00:16:01.012 "name": "spare", 00:16:01.012 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:16:01.012 "is_configured": true, 00:16:01.012 "data_offset": 2048, 00:16:01.012 "data_size": 63488 00:16:01.012 }, 00:16:01.012 { 00:16:01.012 "name": "BaseBdev2", 00:16:01.012 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:01.012 "is_configured": true, 00:16:01.012 "data_offset": 2048, 00:16:01.012 "data_size": 63488 00:16:01.012 }, 00:16:01.012 { 00:16:01.012 "name": "BaseBdev3", 00:16:01.012 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:01.012 "is_configured": true, 00:16:01.012 "data_offset": 2048, 00:16:01.012 "data_size": 63488 00:16:01.012 } 00:16:01.012 ] 00:16:01.012 }' 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.012 16:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.951 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.951 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.951 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.951 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.951 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.951 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.951 
16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.951 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.951 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.951 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.951 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.951 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.951 "name": "raid_bdev1", 00:16:01.951 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:01.951 "strip_size_kb": 64, 00:16:01.951 "state": "online", 00:16:01.951 "raid_level": "raid5f", 00:16:01.951 "superblock": true, 00:16:01.951 "num_base_bdevs": 3, 00:16:01.951 "num_base_bdevs_discovered": 3, 00:16:01.951 "num_base_bdevs_operational": 3, 00:16:01.951 "process": { 00:16:01.951 "type": "rebuild", 00:16:01.951 "target": "spare", 00:16:01.951 "progress": { 00:16:01.951 "blocks": 69632, 00:16:01.951 "percent": 54 00:16:01.951 } 00:16:01.951 }, 00:16:01.951 "base_bdevs_list": [ 00:16:01.951 { 00:16:01.951 "name": "spare", 00:16:01.951 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:16:01.952 "is_configured": true, 00:16:01.952 "data_offset": 2048, 00:16:01.952 "data_size": 63488 00:16:01.952 }, 00:16:01.952 { 00:16:01.952 "name": "BaseBdev2", 00:16:01.952 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:01.952 "is_configured": true, 00:16:01.952 "data_offset": 2048, 00:16:01.952 "data_size": 63488 00:16:01.952 }, 00:16:01.952 { 00:16:01.952 "name": "BaseBdev3", 00:16:01.952 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:01.952 "is_configured": true, 00:16:01.952 "data_offset": 2048, 00:16:01.952 "data_size": 63488 00:16:01.952 } 00:16:01.952 ] 00:16:01.952 }' 00:16:01.952 16:57:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.952 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.952 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.952 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.952 16:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.890 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.890 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.890 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.890 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.890 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.890 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.149 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.149 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.149 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.149 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.149 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.149 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.149 "name": "raid_bdev1", 00:16:03.149 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:03.149 
"strip_size_kb": 64, 00:16:03.149 "state": "online", 00:16:03.149 "raid_level": "raid5f", 00:16:03.149 "superblock": true, 00:16:03.149 "num_base_bdevs": 3, 00:16:03.149 "num_base_bdevs_discovered": 3, 00:16:03.149 "num_base_bdevs_operational": 3, 00:16:03.149 "process": { 00:16:03.149 "type": "rebuild", 00:16:03.149 "target": "spare", 00:16:03.149 "progress": { 00:16:03.149 "blocks": 92160, 00:16:03.149 "percent": 72 00:16:03.149 } 00:16:03.149 }, 00:16:03.149 "base_bdevs_list": [ 00:16:03.149 { 00:16:03.149 "name": "spare", 00:16:03.149 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:16:03.149 "is_configured": true, 00:16:03.149 "data_offset": 2048, 00:16:03.149 "data_size": 63488 00:16:03.149 }, 00:16:03.149 { 00:16:03.149 "name": "BaseBdev2", 00:16:03.149 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:03.149 "is_configured": true, 00:16:03.149 "data_offset": 2048, 00:16:03.149 "data_size": 63488 00:16:03.149 }, 00:16:03.149 { 00:16:03.149 "name": "BaseBdev3", 00:16:03.149 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:03.149 "is_configured": true, 00:16:03.149 "data_offset": 2048, 00:16:03.149 "data_size": 63488 00:16:03.149 } 00:16:03.149 ] 00:16:03.149 }' 00:16:03.149 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.149 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.149 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.149 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.149 16:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.086 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.086 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:16:04.086 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.086 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.086 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.086 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.086 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.086 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.087 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.087 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.087 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.345 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.345 "name": "raid_bdev1", 00:16:04.345 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:04.345 "strip_size_kb": 64, 00:16:04.345 "state": "online", 00:16:04.345 "raid_level": "raid5f", 00:16:04.345 "superblock": true, 00:16:04.345 "num_base_bdevs": 3, 00:16:04.345 "num_base_bdevs_discovered": 3, 00:16:04.345 "num_base_bdevs_operational": 3, 00:16:04.345 "process": { 00:16:04.345 "type": "rebuild", 00:16:04.345 "target": "spare", 00:16:04.345 "progress": { 00:16:04.345 "blocks": 116736, 00:16:04.345 "percent": 91 00:16:04.345 } 00:16:04.345 }, 00:16:04.345 "base_bdevs_list": [ 00:16:04.345 { 00:16:04.345 "name": "spare", 00:16:04.345 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:16:04.345 "is_configured": true, 00:16:04.345 "data_offset": 2048, 00:16:04.345 "data_size": 63488 00:16:04.345 }, 00:16:04.345 { 00:16:04.345 "name": "BaseBdev2", 00:16:04.345 "uuid": 
"7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:04.345 "is_configured": true, 00:16:04.345 "data_offset": 2048, 00:16:04.345 "data_size": 63488 00:16:04.345 }, 00:16:04.345 { 00:16:04.345 "name": "BaseBdev3", 00:16:04.345 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:04.345 "is_configured": true, 00:16:04.345 "data_offset": 2048, 00:16:04.345 "data_size": 63488 00:16:04.345 } 00:16:04.345 ] 00:16:04.345 }' 00:16:04.345 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.345 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.345 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.345 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.345 16:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.604 [2024-11-08 16:57:34.053139] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:04.604 [2024-11-08 16:57:34.053382] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:04.604 [2024-11-08 16:57:34.053621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.576 "name": "raid_bdev1", 00:16:05.576 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:05.576 "strip_size_kb": 64, 00:16:05.576 "state": "online", 00:16:05.576 "raid_level": "raid5f", 00:16:05.576 "superblock": true, 00:16:05.576 "num_base_bdevs": 3, 00:16:05.576 "num_base_bdevs_discovered": 3, 00:16:05.576 "num_base_bdevs_operational": 3, 00:16:05.576 "base_bdevs_list": [ 00:16:05.576 { 00:16:05.576 "name": "spare", 00:16:05.576 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:16:05.576 "is_configured": true, 00:16:05.576 "data_offset": 2048, 00:16:05.576 "data_size": 63488 00:16:05.576 }, 00:16:05.576 { 00:16:05.576 "name": "BaseBdev2", 00:16:05.576 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:05.576 "is_configured": true, 00:16:05.576 "data_offset": 2048, 00:16:05.576 "data_size": 63488 00:16:05.576 }, 00:16:05.576 { 00:16:05.576 "name": "BaseBdev3", 00:16:05.576 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:05.576 "is_configured": true, 00:16:05.576 "data_offset": 2048, 00:16:05.576 "data_size": 63488 00:16:05.576 } 00:16:05.576 ] 00:16:05.576 }' 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.576 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.576 "name": "raid_bdev1", 00:16:05.576 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:05.576 "strip_size_kb": 64, 00:16:05.576 "state": "online", 00:16:05.576 "raid_level": "raid5f", 00:16:05.576 "superblock": true, 00:16:05.576 "num_base_bdevs": 3, 00:16:05.576 "num_base_bdevs_discovered": 3, 00:16:05.576 "num_base_bdevs_operational": 3, 00:16:05.576 "base_bdevs_list": [ 
00:16:05.576 { 00:16:05.576 "name": "spare", 00:16:05.576 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:16:05.576 "is_configured": true, 00:16:05.576 "data_offset": 2048, 00:16:05.576 "data_size": 63488 00:16:05.576 }, 00:16:05.576 { 00:16:05.576 "name": "BaseBdev2", 00:16:05.576 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:05.576 "is_configured": true, 00:16:05.576 "data_offset": 2048, 00:16:05.576 "data_size": 63488 00:16:05.576 }, 00:16:05.577 { 00:16:05.577 "name": "BaseBdev3", 00:16:05.577 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:05.577 "is_configured": true, 00:16:05.577 "data_offset": 2048, 00:16:05.577 "data_size": 63488 00:16:05.577 } 00:16:05.577 ] 00:16:05.577 }' 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.577 16:57:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.577 16:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.577 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.577 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.577 "name": "raid_bdev1", 00:16:05.577 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:05.577 "strip_size_kb": 64, 00:16:05.577 "state": "online", 00:16:05.577 "raid_level": "raid5f", 00:16:05.577 "superblock": true, 00:16:05.577 "num_base_bdevs": 3, 00:16:05.577 "num_base_bdevs_discovered": 3, 00:16:05.577 "num_base_bdevs_operational": 3, 00:16:05.577 "base_bdevs_list": [ 00:16:05.577 { 00:16:05.577 "name": "spare", 00:16:05.577 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:16:05.577 "is_configured": true, 00:16:05.577 "data_offset": 2048, 00:16:05.577 "data_size": 63488 00:16:05.577 }, 00:16:05.577 { 00:16:05.577 "name": "BaseBdev2", 00:16:05.577 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:05.577 "is_configured": true, 00:16:05.577 "data_offset": 2048, 00:16:05.577 "data_size": 63488 00:16:05.577 }, 00:16:05.577 { 00:16:05.577 "name": "BaseBdev3", 00:16:05.577 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:05.577 "is_configured": true, 00:16:05.577 "data_offset": 2048, 00:16:05.577 
"data_size": 63488 00:16:05.577 } 00:16:05.577 ] 00:16:05.577 }' 00:16:05.577 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.577 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.147 [2024-11-08 16:57:35.477404] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.147 [2024-11-08 16:57:35.477518] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.147 [2024-11-08 16:57:35.477673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.147 [2024-11-08 16:57:35.477833] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.147 [2024-11-08 16:57:35.477890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.147 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:06.408 /dev/nbd0 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:06.408 16:57:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.408 1+0 records in 00:16:06.408 1+0 records out 00:16:06.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471137 s, 8.7 MB/s 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.408 16:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:06.667 /dev/nbd1 00:16:06.667 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:06.667 16:57:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:06.667 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:06.667 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:06.667 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.668 1+0 records in 00:16:06.668 1+0 records out 00:16:06.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510935 s, 8.0 MB/s 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.668 16:57:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.668 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:06.927 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:06.927 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.927 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:06.928 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.928 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:06.928 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.928 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:06.928 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.187 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.187 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.187 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.187 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.187 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.187 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:07.187 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.187 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.188 
16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.447 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.447 [2024-11-08 16:57:36.747618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:07.447 
[2024-11-08 16:57:36.747727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.448 [2024-11-08 16:57:36.747761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:07.448 [2024-11-08 16:57:36.747772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.448 [2024-11-08 16:57:36.750401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.448 [2024-11-08 16:57:36.750448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:07.448 [2024-11-08 16:57:36.750552] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:07.448 [2024-11-08 16:57:36.750640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.448 [2024-11-08 16:57:36.750799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.448 [2024-11-08 16:57:36.750928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.448 spare 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.448 [2024-11-08 16:57:36.850870] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:07.448 [2024-11-08 16:57:36.850928] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:07.448 [2024-11-08 16:57:36.851327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:16:07.448 [2024-11-08 16:57:36.852048] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:07.448 [2024-11-08 16:57:36.852116] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:07.448 [2024-11-08 16:57:36.852383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.448 "name": "raid_bdev1", 00:16:07.448 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:07.448 "strip_size_kb": 64, 00:16:07.448 "state": "online", 00:16:07.448 "raid_level": "raid5f", 00:16:07.448 "superblock": true, 00:16:07.448 "num_base_bdevs": 3, 00:16:07.448 "num_base_bdevs_discovered": 3, 00:16:07.448 "num_base_bdevs_operational": 3, 00:16:07.448 "base_bdevs_list": [ 00:16:07.448 { 00:16:07.448 "name": "spare", 00:16:07.448 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:16:07.448 "is_configured": true, 00:16:07.448 "data_offset": 2048, 00:16:07.448 "data_size": 63488 00:16:07.448 }, 00:16:07.448 { 00:16:07.448 "name": "BaseBdev2", 00:16:07.448 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:07.448 "is_configured": true, 00:16:07.448 "data_offset": 2048, 00:16:07.448 "data_size": 63488 00:16:07.448 }, 00:16:07.448 { 00:16:07.448 "name": "BaseBdev3", 00:16:07.448 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:07.448 "is_configured": true, 00:16:07.448 "data_offset": 2048, 00:16:07.448 "data_size": 63488 00:16:07.448 } 00:16:07.448 ] 00:16:07.448 }' 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.448 16:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.017 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.017 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.017 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.017 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:16:08.017 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.018 "name": "raid_bdev1", 00:16:08.018 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:08.018 "strip_size_kb": 64, 00:16:08.018 "state": "online", 00:16:08.018 "raid_level": "raid5f", 00:16:08.018 "superblock": true, 00:16:08.018 "num_base_bdevs": 3, 00:16:08.018 "num_base_bdevs_discovered": 3, 00:16:08.018 "num_base_bdevs_operational": 3, 00:16:08.018 "base_bdevs_list": [ 00:16:08.018 { 00:16:08.018 "name": "spare", 00:16:08.018 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:16:08.018 "is_configured": true, 00:16:08.018 "data_offset": 2048, 00:16:08.018 "data_size": 63488 00:16:08.018 }, 00:16:08.018 { 00:16:08.018 "name": "BaseBdev2", 00:16:08.018 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:08.018 "is_configured": true, 00:16:08.018 "data_offset": 2048, 00:16:08.018 "data_size": 63488 00:16:08.018 }, 00:16:08.018 { 00:16:08.018 "name": "BaseBdev3", 00:16:08.018 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:08.018 "is_configured": true, 00:16:08.018 "data_offset": 2048, 00:16:08.018 "data_size": 63488 00:16:08.018 } 00:16:08.018 ] 00:16:08.018 }' 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.018 [2024-11-08 16:57:37.507531] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.018 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.277 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.277 "name": "raid_bdev1", 00:16:08.277 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:08.277 "strip_size_kb": 64, 00:16:08.277 "state": "online", 00:16:08.277 "raid_level": "raid5f", 00:16:08.277 "superblock": true, 00:16:08.277 "num_base_bdevs": 3, 00:16:08.277 "num_base_bdevs_discovered": 2, 00:16:08.277 "num_base_bdevs_operational": 2, 00:16:08.277 "base_bdevs_list": [ 00:16:08.277 { 00:16:08.277 "name": null, 00:16:08.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.277 "is_configured": false, 00:16:08.277 "data_offset": 0, 00:16:08.277 "data_size": 63488 00:16:08.277 }, 00:16:08.277 { 00:16:08.277 "name": "BaseBdev2", 
00:16:08.277 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:08.277 "is_configured": true, 00:16:08.277 "data_offset": 2048, 00:16:08.277 "data_size": 63488 00:16:08.277 }, 00:16:08.277 { 00:16:08.277 "name": "BaseBdev3", 00:16:08.277 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:08.277 "is_configured": true, 00:16:08.277 "data_offset": 2048, 00:16:08.277 "data_size": 63488 00:16:08.277 } 00:16:08.277 ] 00:16:08.277 }' 00:16:08.277 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.277 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.536 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:08.536 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.536 16:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.536 [2024-11-08 16:57:38.002714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.536 [2024-11-08 16:57:38.003125] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:08.536 [2024-11-08 16:57:38.003146] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:08.536 [2024-11-08 16:57:38.003211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.536 [2024-11-08 16:57:38.010072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:16:08.536 16:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.536 16:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:08.536 [2024-11-08 16:57:38.012767] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.933 "name": "raid_bdev1", 00:16:09.933 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:09.933 "strip_size_kb": 64, 00:16:09.933 "state": "online", 00:16:09.933 
"raid_level": "raid5f", 00:16:09.933 "superblock": true, 00:16:09.933 "num_base_bdevs": 3, 00:16:09.933 "num_base_bdevs_discovered": 3, 00:16:09.933 "num_base_bdevs_operational": 3, 00:16:09.933 "process": { 00:16:09.933 "type": "rebuild", 00:16:09.933 "target": "spare", 00:16:09.933 "progress": { 00:16:09.933 "blocks": 20480, 00:16:09.933 "percent": 16 00:16:09.933 } 00:16:09.933 }, 00:16:09.933 "base_bdevs_list": [ 00:16:09.933 { 00:16:09.933 "name": "spare", 00:16:09.933 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:16:09.933 "is_configured": true, 00:16:09.933 "data_offset": 2048, 00:16:09.933 "data_size": 63488 00:16:09.933 }, 00:16:09.933 { 00:16:09.933 "name": "BaseBdev2", 00:16:09.933 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:09.933 "is_configured": true, 00:16:09.933 "data_offset": 2048, 00:16:09.933 "data_size": 63488 00:16:09.933 }, 00:16:09.933 { 00:16:09.933 "name": "BaseBdev3", 00:16:09.933 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:09.933 "is_configured": true, 00:16:09.933 "data_offset": 2048, 00:16:09.933 "data_size": 63488 00:16:09.933 } 00:16:09.933 ] 00:16:09.933 }' 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.933 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.933 [2024-11-08 16:57:39.176384] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.934 [2024-11-08 16:57:39.225538] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.934 [2024-11-08 16:57:39.225805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.934 [2024-11-08 16:57:39.225839] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.934 [2024-11-08 16:57:39.225851] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.934 "name": "raid_bdev1", 00:16:09.934 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:09.934 "strip_size_kb": 64, 00:16:09.934 "state": "online", 00:16:09.934 "raid_level": "raid5f", 00:16:09.934 "superblock": true, 00:16:09.934 "num_base_bdevs": 3, 00:16:09.934 "num_base_bdevs_discovered": 2, 00:16:09.934 "num_base_bdevs_operational": 2, 00:16:09.934 "base_bdevs_list": [ 00:16:09.934 { 00:16:09.934 "name": null, 00:16:09.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.934 "is_configured": false, 00:16:09.934 "data_offset": 0, 00:16:09.934 "data_size": 63488 00:16:09.934 }, 00:16:09.934 { 00:16:09.934 "name": "BaseBdev2", 00:16:09.934 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:09.934 "is_configured": true, 00:16:09.934 "data_offset": 2048, 00:16:09.934 "data_size": 63488 00:16:09.934 }, 00:16:09.934 { 00:16:09.934 "name": "BaseBdev3", 00:16:09.934 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:09.934 "is_configured": true, 00:16:09.934 "data_offset": 2048, 00:16:09.934 "data_size": 63488 00:16:09.934 } 00:16:09.934 ] 00:16:09.934 }' 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.934 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.194 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.194 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.194 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.453 [2024-11-08 16:57:39.722496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.453 [2024-11-08 16:57:39.722669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.453 [2024-11-08 16:57:39.722743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:10.453 [2024-11-08 16:57:39.722778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.453 [2024-11-08 16:57:39.723374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.453 [2024-11-08 16:57:39.723450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.453 [2024-11-08 16:57:39.723591] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:10.453 [2024-11-08 16:57:39.723658] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:10.453 [2024-11-08 16:57:39.723714] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:10.453 [2024-11-08 16:57:39.723785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.453 [2024-11-08 16:57:39.727704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:10.453 spare 00:16:10.453 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.453 16:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:10.453 [2024-11-08 16:57:39.730228] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.390 "name": "raid_bdev1", 00:16:11.390 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:11.390 "strip_size_kb": 64, 00:16:11.390 "state": 
"online", 00:16:11.390 "raid_level": "raid5f", 00:16:11.390 "superblock": true, 00:16:11.390 "num_base_bdevs": 3, 00:16:11.390 "num_base_bdevs_discovered": 3, 00:16:11.390 "num_base_bdevs_operational": 3, 00:16:11.390 "process": { 00:16:11.390 "type": "rebuild", 00:16:11.390 "target": "spare", 00:16:11.390 "progress": { 00:16:11.390 "blocks": 20480, 00:16:11.390 "percent": 16 00:16:11.390 } 00:16:11.390 }, 00:16:11.390 "base_bdevs_list": [ 00:16:11.390 { 00:16:11.390 "name": "spare", 00:16:11.390 "uuid": "351cc229-50f3-5ef0-8c6b-760bf03bc815", 00:16:11.390 "is_configured": true, 00:16:11.390 "data_offset": 2048, 00:16:11.390 "data_size": 63488 00:16:11.390 }, 00:16:11.390 { 00:16:11.390 "name": "BaseBdev2", 00:16:11.390 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:11.390 "is_configured": true, 00:16:11.390 "data_offset": 2048, 00:16:11.390 "data_size": 63488 00:16:11.390 }, 00:16:11.390 { 00:16:11.390 "name": "BaseBdev3", 00:16:11.390 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:11.390 "is_configured": true, 00:16:11.390 "data_offset": 2048, 00:16:11.390 "data_size": 63488 00:16:11.390 } 00:16:11.390 ] 00:16:11.390 }' 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.390 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.390 [2024-11-08 16:57:40.886270] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.648 [2024-11-08 16:57:40.941564] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:11.648 [2024-11-08 16:57:40.941796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.648 [2024-11-08 16:57:40.941845] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.648 [2024-11-08 16:57:40.941881] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.648 16:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.648 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.648 "name": "raid_bdev1", 00:16:11.648 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:11.648 "strip_size_kb": 64, 00:16:11.648 "state": "online", 00:16:11.648 "raid_level": "raid5f", 00:16:11.648 "superblock": true, 00:16:11.648 "num_base_bdevs": 3, 00:16:11.648 "num_base_bdevs_discovered": 2, 00:16:11.648 "num_base_bdevs_operational": 2, 00:16:11.648 "base_bdevs_list": [ 00:16:11.648 { 00:16:11.648 "name": null, 00:16:11.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.648 "is_configured": false, 00:16:11.648 "data_offset": 0, 00:16:11.648 "data_size": 63488 00:16:11.648 }, 00:16:11.648 { 00:16:11.648 "name": "BaseBdev2", 00:16:11.648 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:11.648 "is_configured": true, 00:16:11.648 "data_offset": 2048, 00:16:11.648 "data_size": 63488 00:16:11.648 }, 00:16:11.648 { 00:16:11.648 "name": "BaseBdev3", 00:16:11.648 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:11.648 "is_configured": true, 00:16:11.648 "data_offset": 2048, 00:16:11.648 "data_size": 63488 00:16:11.648 } 00:16:11.648 ] 00:16:11.648 }' 00:16:11.648 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.648 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.216 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.216 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:12.216 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.216 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.216 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.216 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.216 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.216 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.216 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.217 "name": "raid_bdev1", 00:16:12.217 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:12.217 "strip_size_kb": 64, 00:16:12.217 "state": "online", 00:16:12.217 "raid_level": "raid5f", 00:16:12.217 "superblock": true, 00:16:12.217 "num_base_bdevs": 3, 00:16:12.217 "num_base_bdevs_discovered": 2, 00:16:12.217 "num_base_bdevs_operational": 2, 00:16:12.217 "base_bdevs_list": [ 00:16:12.217 { 00:16:12.217 "name": null, 00:16:12.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.217 "is_configured": false, 00:16:12.217 "data_offset": 0, 00:16:12.217 "data_size": 63488 00:16:12.217 }, 00:16:12.217 { 00:16:12.217 "name": "BaseBdev2", 00:16:12.217 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:12.217 "is_configured": true, 00:16:12.217 "data_offset": 2048, 00:16:12.217 "data_size": 63488 00:16:12.217 }, 00:16:12.217 { 00:16:12.217 "name": "BaseBdev3", 00:16:12.217 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:12.217 "is_configured": true, 
00:16:12.217 "data_offset": 2048, 00:16:12.217 "data_size": 63488 00:16:12.217 } 00:16:12.217 ] 00:16:12.217 }' 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.217 [2024-11-08 16:57:41.622317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:12.217 [2024-11-08 16:57:41.622466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.217 [2024-11-08 16:57:41.622502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:12.217 [2024-11-08 16:57:41.622516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.217 [2024-11-08 16:57:41.622996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.217 [2024-11-08 
16:57:41.623022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:12.217 [2024-11-08 16:57:41.623107] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:12.217 [2024-11-08 16:57:41.623128] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:12.217 [2024-11-08 16:57:41.623138] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:12.217 [2024-11-08 16:57:41.623170] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:12.217 BaseBdev1 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.217 16:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.151 16:57:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.151 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.408 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.408 "name": "raid_bdev1", 00:16:13.408 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:13.408 "strip_size_kb": 64, 00:16:13.408 "state": "online", 00:16:13.408 "raid_level": "raid5f", 00:16:13.408 "superblock": true, 00:16:13.408 "num_base_bdevs": 3, 00:16:13.408 "num_base_bdevs_discovered": 2, 00:16:13.408 "num_base_bdevs_operational": 2, 00:16:13.408 "base_bdevs_list": [ 00:16:13.408 { 00:16:13.408 "name": null, 00:16:13.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.408 "is_configured": false, 00:16:13.408 "data_offset": 0, 00:16:13.408 "data_size": 63488 00:16:13.408 }, 00:16:13.408 { 00:16:13.408 "name": "BaseBdev2", 00:16:13.408 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:13.408 "is_configured": true, 00:16:13.408 "data_offset": 2048, 00:16:13.408 "data_size": 63488 00:16:13.408 }, 00:16:13.408 { 00:16:13.408 "name": "BaseBdev3", 00:16:13.408 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:13.408 "is_configured": true, 00:16:13.408 "data_offset": 2048, 00:16:13.408 "data_size": 63488 00:16:13.408 } 00:16:13.408 ] 00:16:13.408 }' 00:16:13.408 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.408 16:57:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.667 "name": "raid_bdev1", 00:16:13.667 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:13.667 "strip_size_kb": 64, 00:16:13.667 "state": "online", 00:16:13.667 "raid_level": "raid5f", 00:16:13.667 "superblock": true, 00:16:13.667 "num_base_bdevs": 3, 00:16:13.667 "num_base_bdevs_discovered": 2, 00:16:13.667 "num_base_bdevs_operational": 2, 00:16:13.667 "base_bdevs_list": [ 00:16:13.667 { 00:16:13.667 "name": null, 00:16:13.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.667 "is_configured": false, 00:16:13.667 "data_offset": 0, 00:16:13.667 "data_size": 63488 00:16:13.667 }, 00:16:13.667 { 00:16:13.667 "name": "BaseBdev2", 00:16:13.667 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 
00:16:13.667 "is_configured": true, 00:16:13.667 "data_offset": 2048, 00:16:13.667 "data_size": 63488 00:16:13.667 }, 00:16:13.667 { 00:16:13.667 "name": "BaseBdev3", 00:16:13.667 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:13.667 "is_configured": true, 00:16:13.667 "data_offset": 2048, 00:16:13.667 "data_size": 63488 00:16:13.667 } 00:16:13.667 ] 00:16:13.667 }' 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.667 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.925 16:57:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.925 [2024-11-08 16:57:43.211717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.925 [2024-11-08 16:57:43.211985] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.925 [2024-11-08 16:57:43.212066] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:13.925 request: 00:16:13.925 { 00:16:13.925 "base_bdev": "BaseBdev1", 00:16:13.925 "raid_bdev": "raid_bdev1", 00:16:13.925 "method": "bdev_raid_add_base_bdev", 00:16:13.925 "req_id": 1 00:16:13.925 } 00:16:13.925 Got JSON-RPC error response 00:16:13.925 response: 00:16:13.925 { 00:16:13.925 "code": -22, 00:16:13.925 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:13.925 } 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:13.925 16:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.861 "name": "raid_bdev1", 00:16:14.861 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:14.861 "strip_size_kb": 64, 00:16:14.861 "state": "online", 00:16:14.861 "raid_level": "raid5f", 00:16:14.861 "superblock": true, 00:16:14.861 "num_base_bdevs": 3, 00:16:14.861 "num_base_bdevs_discovered": 2, 00:16:14.861 "num_base_bdevs_operational": 2, 00:16:14.861 "base_bdevs_list": [ 00:16:14.861 { 00:16:14.861 "name": null, 00:16:14.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.861 "is_configured": false, 00:16:14.861 "data_offset": 0, 00:16:14.861 "data_size": 63488 00:16:14.861 }, 00:16:14.861 { 00:16:14.861 
"name": "BaseBdev2", 00:16:14.861 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:14.861 "is_configured": true, 00:16:14.861 "data_offset": 2048, 00:16:14.861 "data_size": 63488 00:16:14.861 }, 00:16:14.861 { 00:16:14.861 "name": "BaseBdev3", 00:16:14.861 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:14.861 "is_configured": true, 00:16:14.861 "data_offset": 2048, 00:16:14.861 "data_size": 63488 00:16:14.861 } 00:16:14.861 ] 00:16:14.861 }' 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.861 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.429 "name": "raid_bdev1", 00:16:15.429 "uuid": "f4689284-41ef-40e3-8030-d8bb73d64d9d", 00:16:15.429 
"strip_size_kb": 64, 00:16:15.429 "state": "online", 00:16:15.429 "raid_level": "raid5f", 00:16:15.429 "superblock": true, 00:16:15.429 "num_base_bdevs": 3, 00:16:15.429 "num_base_bdevs_discovered": 2, 00:16:15.429 "num_base_bdevs_operational": 2, 00:16:15.429 "base_bdevs_list": [ 00:16:15.429 { 00:16:15.429 "name": null, 00:16:15.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.429 "is_configured": false, 00:16:15.429 "data_offset": 0, 00:16:15.429 "data_size": 63488 00:16:15.429 }, 00:16:15.429 { 00:16:15.429 "name": "BaseBdev2", 00:16:15.429 "uuid": "7898cbb8-3a47-5740-a535-af92c1e035d5", 00:16:15.429 "is_configured": true, 00:16:15.429 "data_offset": 2048, 00:16:15.429 "data_size": 63488 00:16:15.429 }, 00:16:15.429 { 00:16:15.429 "name": "BaseBdev3", 00:16:15.429 "uuid": "4d40d3b7-a975-5604-a783-a06f2b123cc7", 00:16:15.429 "is_configured": true, 00:16:15.429 "data_offset": 2048, 00:16:15.429 "data_size": 63488 00:16:15.429 } 00:16:15.429 ] 00:16:15.429 }' 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92605 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92605 ']' 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92605 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:15.429 16:57:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92605 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92605' 00:16:15.429 killing process with pid 92605 00:16:15.429 Received shutdown signal, test time was about 60.000000 seconds 00:16:15.429 00:16:15.429 Latency(us) 00:16:15.429 [2024-11-08T16:57:44.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.429 [2024-11-08T16:57:44.957Z] =================================================================================================================== 00:16:15.429 [2024-11-08T16:57:44.957Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92605 00:16:15.429 [2024-11-08 16:57:44.945328] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.429 16:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92605 00:16:15.429 [2024-11-08 16:57:44.945513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.429 [2024-11-08 16:57:44.945615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.429 [2024-11-08 16:57:44.945736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:15.688 [2024-11-08 16:57:44.995252] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.947 16:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:15.948 00:16:15.948 real 0m22.520s 00:16:15.948 user 0m29.575s 
00:16:15.948 sys 0m2.960s 00:16:15.948 16:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:15.948 16:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.948 ************************************ 00:16:15.948 END TEST raid5f_rebuild_test_sb 00:16:15.948 ************************************ 00:16:15.948 16:57:45 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:15.948 16:57:45 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:15.948 16:57:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:15.948 16:57:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:15.948 16:57:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.948 ************************************ 00:16:15.948 START TEST raid5f_state_function_test 00:16:15.948 ************************************ 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93346 00:16:15.948 Process raid pid: 93346 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93346' 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93346 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93346 ']' 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:15.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:15.948 16:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.948 [2024-11-08 16:57:45.389340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
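The xtrace above shows `raid_state_function_test` deriving its creation arguments: raid5f is not raid1, so the test picks a 64 KiB strip size (`-z 64`), and with `superblock=false` the superblock flag is left empty. A minimal sketch of that argument logic (the function name and signature are mine, not SPDK's):

```python
def build_raid_create_args(raid_level: str, superblock: bool,
                           base_bdevs: list[str], name: str) -> list[str]:
    """Sketch of the arg construction seen in the trace (not SPDK source)."""
    args = ["bdev_raid_create", "-r", raid_level,
            "-b", " ".join(base_bdevs), "-n", name]
    if raid_level != "raid1":
        # raid1 takes no strip size; the other levels use 64 KiB here
        args[1:1] = ["-z", "64"]
    if superblock:
        args.append("-s")
    return args
```

For this test the result matches the `rpc_cmd bdev_raid_create -z 64 -r raid5f ...` call logged further down.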
00:16:15.948 [2024-11-08 16:57:45.389506] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.207 [2024-11-08 16:57:45.547439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.207 [2024-11-08 16:57:45.602941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.207 [2024-11-08 16:57:45.648258] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.207 [2024-11-08 16:57:45.648304] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.774 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:16.774 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:16.774 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:16.774 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.774 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.033 [2024-11-08 16:57:46.303479] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.033 [2024-11-08 16:57:46.303553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.033 [2024-11-08 16:57:46.303567] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.033 [2024-11-08 16:57:46.303579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.033 [2024-11-08 16:57:46.303587] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:17.033 [2024-11-08 16:57:46.303602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.033 [2024-11-08 16:57:46.303610] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:17.033 [2024-11-08 16:57:46.303620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.033 16:57:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.033 "name": "Existed_Raid", 00:16:17.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.033 "strip_size_kb": 64, 00:16:17.033 "state": "configuring", 00:16:17.033 "raid_level": "raid5f", 00:16:17.033 "superblock": false, 00:16:17.033 "num_base_bdevs": 4, 00:16:17.033 "num_base_bdevs_discovered": 0, 00:16:17.033 "num_base_bdevs_operational": 4, 00:16:17.033 "base_bdevs_list": [ 00:16:17.033 { 00:16:17.033 "name": "BaseBdev1", 00:16:17.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.033 "is_configured": false, 00:16:17.033 "data_offset": 0, 00:16:17.033 "data_size": 0 00:16:17.033 }, 00:16:17.033 { 00:16:17.033 "name": "BaseBdev2", 00:16:17.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.033 "is_configured": false, 00:16:17.033 "data_offset": 0, 00:16:17.033 "data_size": 0 00:16:17.033 }, 00:16:17.033 { 00:16:17.033 "name": "BaseBdev3", 00:16:17.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.033 "is_configured": false, 00:16:17.033 "data_offset": 0, 00:16:17.033 "data_size": 0 00:16:17.033 }, 00:16:17.033 { 00:16:17.033 "name": "BaseBdev4", 00:16:17.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.033 "is_configured": false, 00:16:17.033 "data_offset": 0, 00:16:17.033 "data_size": 0 00:16:17.033 } 00:16:17.033 ] 00:16:17.033 }' 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.033 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.601 [2024-11-08 16:57:46.827282] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.601 [2024-11-08 16:57:46.827401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.601 [2024-11-08 16:57:46.839346] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.601 [2024-11-08 16:57:46.839484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.601 [2024-11-08 16:57:46.839524] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.601 [2024-11-08 16:57:46.839553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.601 [2024-11-08 16:57:46.839607] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:17.601 [2024-11-08 16:57:46.839663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.601 [2024-11-08 16:57:46.839696] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
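The `bdev_raid_create` call above succeeds even though none of the four base bdevs exist yet ("doesn't exist now"), and the raid bdev is left in the `configuring` state. A toy model of that state rule, inferred from the trace rather than taken from SPDK code:

```python
def raid_state(num_base_bdevs: int, discovered: set[str]) -> str:
    """Inferred rule: the raid bdev stays 'configuring' until every
    base bdev slot has been discovered and claimed."""
    return "online" if len(discovered) == num_base_bdevs else "configuring"
```

This matches the `num_base_bdevs_discovered` counter climbing from 0 toward 4 in the `bdev_raid_get_bdevs` dumps below while `"state"` remains `"configuring"`.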
00:16:17.601 [2024-11-08 16:57:46.839733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.601 [2024-11-08 16:57:46.861159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.601 BaseBdev1 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.601 
16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.601 [ 00:16:17.601 { 00:16:17.601 "name": "BaseBdev1", 00:16:17.601 "aliases": [ 00:16:17.601 "cc94158f-f528-4da1-b798-e899ca697993" 00:16:17.601 ], 00:16:17.601 "product_name": "Malloc disk", 00:16:17.601 "block_size": 512, 00:16:17.601 "num_blocks": 65536, 00:16:17.601 "uuid": "cc94158f-f528-4da1-b798-e899ca697993", 00:16:17.601 "assigned_rate_limits": { 00:16:17.601 "rw_ios_per_sec": 0, 00:16:17.601 "rw_mbytes_per_sec": 0, 00:16:17.601 "r_mbytes_per_sec": 0, 00:16:17.601 "w_mbytes_per_sec": 0 00:16:17.601 }, 00:16:17.601 "claimed": true, 00:16:17.601 "claim_type": "exclusive_write", 00:16:17.601 "zoned": false, 00:16:17.601 "supported_io_types": { 00:16:17.601 "read": true, 00:16:17.601 "write": true, 00:16:17.601 "unmap": true, 00:16:17.601 "flush": true, 00:16:17.601 "reset": true, 00:16:17.601 "nvme_admin": false, 00:16:17.601 "nvme_io": false, 00:16:17.601 "nvme_io_md": false, 00:16:17.601 "write_zeroes": true, 00:16:17.601 "zcopy": true, 00:16:17.601 "get_zone_info": false, 00:16:17.601 "zone_management": false, 00:16:17.601 "zone_append": false, 00:16:17.601 "compare": false, 00:16:17.601 "compare_and_write": false, 00:16:17.601 "abort": true, 00:16:17.601 "seek_hole": false, 00:16:17.601 "seek_data": false, 00:16:17.601 "copy": true, 00:16:17.601 "nvme_iov_md": false 00:16:17.601 }, 00:16:17.601 "memory_domains": [ 00:16:17.601 { 00:16:17.601 "dma_device_id": "system", 00:16:17.601 "dma_device_type": 1 00:16:17.601 }, 00:16:17.601 { 00:16:17.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.601 "dma_device_type": 2 00:16:17.601 } 00:16:17.601 ], 00:16:17.601 "driver_specific": {} 00:16:17.601 } 
00:16:17.601 ] 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
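The `bdev_get_bdevs` dump above describes the base bdev created by `bdev_malloc_create 32 512`: 65536 blocks of 512 bytes, i.e. 32 MiB, claimed with `exclusive_write` once the raid module takes it. A small sketch checking those invariants on a descriptor of the same shape (fields trimmed from the log output):

```python
import json

# Subset of the bdev_get_bdevs descriptor logged above
descriptor = json.loads("""
{
  "name": "BaseBdev1",
  "block_size": 512,
  "num_blocks": 65536,
  "claimed": true,
  "claim_type": "exclusive_write"
}
""")

# 512-byte blocks x 65536 blocks = 32 MiB, matching bdev_malloc_create 32 512
size_mib = descriptor["block_size"] * descriptor["num_blocks"] // (1024 * 1024)
```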
00:16:17.601 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.601 "name": "Existed_Raid", 00:16:17.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.601 "strip_size_kb": 64, 00:16:17.601 "state": "configuring", 00:16:17.601 "raid_level": "raid5f", 00:16:17.601 "superblock": false, 00:16:17.601 "num_base_bdevs": 4, 00:16:17.601 "num_base_bdevs_discovered": 1, 00:16:17.601 "num_base_bdevs_operational": 4, 00:16:17.601 "base_bdevs_list": [ 00:16:17.601 { 00:16:17.601 "name": "BaseBdev1", 00:16:17.601 "uuid": "cc94158f-f528-4da1-b798-e899ca697993", 00:16:17.601 "is_configured": true, 00:16:17.601 "data_offset": 0, 00:16:17.601 "data_size": 65536 00:16:17.601 }, 00:16:17.601 { 00:16:17.601 "name": "BaseBdev2", 00:16:17.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.601 "is_configured": false, 00:16:17.601 "data_offset": 0, 00:16:17.601 "data_size": 0 00:16:17.601 }, 00:16:17.601 { 00:16:17.601 "name": "BaseBdev3", 00:16:17.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.602 "is_configured": false, 00:16:17.602 "data_offset": 0, 00:16:17.602 "data_size": 0 00:16:17.602 }, 00:16:17.602 { 00:16:17.602 "name": "BaseBdev4", 00:16:17.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.602 "is_configured": false, 00:16:17.602 "data_offset": 0, 00:16:17.602 "data_size": 0 00:16:17.602 } 00:16:17.602 ] 00:16:17.602 }' 00:16:17.602 16:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.602 16:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.917 
[2024-11-08 16:57:47.320460] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.917 [2024-11-08 16:57:47.320599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.917 [2024-11-08 16:57:47.332525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.917 [2024-11-08 16:57:47.334858] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.917 [2024-11-08 16:57:47.334960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.917 [2024-11-08 16:57:47.335004] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:17.917 [2024-11-08 16:57:47.335043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.917 [2024-11-08 16:57:47.335094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:17.917 [2024-11-08 16:57:47.335121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.917 "name": "Existed_Raid", 00:16:17.917 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:17.917 "strip_size_kb": 64, 00:16:17.917 "state": "configuring", 00:16:17.917 "raid_level": "raid5f", 00:16:17.917 "superblock": false, 00:16:17.917 "num_base_bdevs": 4, 00:16:17.917 "num_base_bdevs_discovered": 1, 00:16:17.917 "num_base_bdevs_operational": 4, 00:16:17.917 "base_bdevs_list": [ 00:16:17.917 { 00:16:17.917 "name": "BaseBdev1", 00:16:17.917 "uuid": "cc94158f-f528-4da1-b798-e899ca697993", 00:16:17.917 "is_configured": true, 00:16:17.917 "data_offset": 0, 00:16:17.917 "data_size": 65536 00:16:17.917 }, 00:16:17.917 { 00:16:17.917 "name": "BaseBdev2", 00:16:17.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.917 "is_configured": false, 00:16:17.917 "data_offset": 0, 00:16:17.917 "data_size": 0 00:16:17.917 }, 00:16:17.917 { 00:16:17.917 "name": "BaseBdev3", 00:16:17.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.917 "is_configured": false, 00:16:17.917 "data_offset": 0, 00:16:17.917 "data_size": 0 00:16:17.917 }, 00:16:17.917 { 00:16:17.917 "name": "BaseBdev4", 00:16:17.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.917 "is_configured": false, 00:16:17.917 "data_offset": 0, 00:16:17.917 "data_size": 0 00:16:17.917 } 00:16:17.917 ] 00:16:17.917 }' 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.917 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.485 [2024-11-08 16:57:47.778171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.485 BaseBdev2 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.485 [ 00:16:18.485 { 00:16:18.485 "name": "BaseBdev2", 00:16:18.485 "aliases": [ 00:16:18.485 "9a76b50d-cb8b-454e-a842-0dbaa281d105" 00:16:18.485 ], 00:16:18.485 "product_name": "Malloc disk", 00:16:18.485 "block_size": 512, 00:16:18.485 "num_blocks": 65536, 00:16:18.485 "uuid": "9a76b50d-cb8b-454e-a842-0dbaa281d105", 00:16:18.485 "assigned_rate_limits": { 00:16:18.485 "rw_ios_per_sec": 0, 00:16:18.485 "rw_mbytes_per_sec": 0, 00:16:18.485 
"r_mbytes_per_sec": 0, 00:16:18.485 "w_mbytes_per_sec": 0 00:16:18.485 }, 00:16:18.485 "claimed": true, 00:16:18.485 "claim_type": "exclusive_write", 00:16:18.485 "zoned": false, 00:16:18.485 "supported_io_types": { 00:16:18.485 "read": true, 00:16:18.485 "write": true, 00:16:18.485 "unmap": true, 00:16:18.485 "flush": true, 00:16:18.485 "reset": true, 00:16:18.485 "nvme_admin": false, 00:16:18.485 "nvme_io": false, 00:16:18.485 "nvme_io_md": false, 00:16:18.485 "write_zeroes": true, 00:16:18.485 "zcopy": true, 00:16:18.485 "get_zone_info": false, 00:16:18.485 "zone_management": false, 00:16:18.485 "zone_append": false, 00:16:18.485 "compare": false, 00:16:18.485 "compare_and_write": false, 00:16:18.485 "abort": true, 00:16:18.485 "seek_hole": false, 00:16:18.485 "seek_data": false, 00:16:18.485 "copy": true, 00:16:18.485 "nvme_iov_md": false 00:16:18.485 }, 00:16:18.485 "memory_domains": [ 00:16:18.485 { 00:16:18.485 "dma_device_id": "system", 00:16:18.485 "dma_device_type": 1 00:16:18.485 }, 00:16:18.485 { 00:16:18.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.485 "dma_device_type": 2 00:16:18.485 } 00:16:18.485 ], 00:16:18.485 "driver_specific": {} 00:16:18.485 } 00:16:18.485 ] 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.485 "name": "Existed_Raid", 00:16:18.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.485 "strip_size_kb": 64, 00:16:18.485 "state": "configuring", 00:16:18.485 "raid_level": "raid5f", 00:16:18.485 "superblock": false, 00:16:18.485 "num_base_bdevs": 4, 00:16:18.485 "num_base_bdevs_discovered": 2, 00:16:18.485 "num_base_bdevs_operational": 4, 00:16:18.485 "base_bdevs_list": [ 00:16:18.485 { 00:16:18.485 "name": "BaseBdev1", 00:16:18.485 "uuid": 
"cc94158f-f528-4da1-b798-e899ca697993", 00:16:18.485 "is_configured": true, 00:16:18.485 "data_offset": 0, 00:16:18.485 "data_size": 65536 00:16:18.485 }, 00:16:18.485 { 00:16:18.485 "name": "BaseBdev2", 00:16:18.485 "uuid": "9a76b50d-cb8b-454e-a842-0dbaa281d105", 00:16:18.485 "is_configured": true, 00:16:18.485 "data_offset": 0, 00:16:18.485 "data_size": 65536 00:16:18.485 }, 00:16:18.485 { 00:16:18.485 "name": "BaseBdev3", 00:16:18.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.485 "is_configured": false, 00:16:18.485 "data_offset": 0, 00:16:18.485 "data_size": 0 00:16:18.485 }, 00:16:18.485 { 00:16:18.485 "name": "BaseBdev4", 00:16:18.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.485 "is_configured": false, 00:16:18.485 "data_offset": 0, 00:16:18.485 "data_size": 0 00:16:18.485 } 00:16:18.485 ] 00:16:18.485 }' 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.485 16:57:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.054 [2024-11-08 16:57:48.284798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.054 BaseBdev3 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.054 [ 00:16:19.054 { 00:16:19.054 "name": "BaseBdev3", 00:16:19.054 "aliases": [ 00:16:19.054 "a37c3a60-4deb-4cf4-a507-c1efe03f2fc8" 00:16:19.054 ], 00:16:19.054 "product_name": "Malloc disk", 00:16:19.054 "block_size": 512, 00:16:19.054 "num_blocks": 65536, 00:16:19.054 "uuid": "a37c3a60-4deb-4cf4-a507-c1efe03f2fc8", 00:16:19.054 "assigned_rate_limits": { 00:16:19.054 "rw_ios_per_sec": 0, 00:16:19.054 "rw_mbytes_per_sec": 0, 00:16:19.054 "r_mbytes_per_sec": 0, 00:16:19.054 "w_mbytes_per_sec": 0 00:16:19.054 }, 00:16:19.054 "claimed": true, 00:16:19.054 "claim_type": "exclusive_write", 00:16:19.054 "zoned": false, 00:16:19.054 "supported_io_types": { 00:16:19.054 "read": true, 00:16:19.054 "write": true, 00:16:19.054 "unmap": true, 00:16:19.054 "flush": true, 00:16:19.054 "reset": true, 00:16:19.054 "nvme_admin": false, 
00:16:19.054 "nvme_io": false,
00:16:19.054 "nvme_io_md": false,
00:16:19.054 "write_zeroes": true,
00:16:19.054 "zcopy": true,
00:16:19.054 "get_zone_info": false,
00:16:19.054 "zone_management": false,
00:16:19.054 "zone_append": false,
00:16:19.054 "compare": false,
00:16:19.054 "compare_and_write": false,
00:16:19.054 "abort": true,
00:16:19.054 "seek_hole": false,
00:16:19.054 "seek_data": false,
00:16:19.054 "copy": true,
00:16:19.054 "nvme_iov_md": false
00:16:19.054 },
00:16:19.054 "memory_domains": [
00:16:19.054 {
00:16:19.054 "dma_device_id": "system",
00:16:19.054 "dma_device_type": 1
00:16:19.054 },
00:16:19.054 {
00:16:19.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:19.054 "dma_device_type": 2
00:16:19.054 }
00:16:19.054 ],
00:16:19.054 "driver_specific": {}
00:16:19.054 }
00:16:19.054 ]
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:19.054 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.055 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.055 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.055 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:19.055 "name": "Existed_Raid",
00:16:19.055 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.055 "strip_size_kb": 64,
00:16:19.055 "state": "configuring",
00:16:19.055 "raid_level": "raid5f",
00:16:19.055 "superblock": false,
00:16:19.055 "num_base_bdevs": 4,
00:16:19.055 "num_base_bdevs_discovered": 3,
00:16:19.055 "num_base_bdevs_operational": 4,
00:16:19.055 "base_bdevs_list": [
00:16:19.055 {
00:16:19.055 "name": "BaseBdev1",
00:16:19.055 "uuid": "cc94158f-f528-4da1-b798-e899ca697993",
00:16:19.055 "is_configured": true,
00:16:19.055 "data_offset": 0,
00:16:19.055 "data_size": 65536
00:16:19.055 },
00:16:19.055 {
00:16:19.055 "name": "BaseBdev2",
00:16:19.055 "uuid": "9a76b50d-cb8b-454e-a842-0dbaa281d105",
00:16:19.055 "is_configured": true,
00:16:19.055 "data_offset": 0,
00:16:19.055 "data_size": 65536
00:16:19.055 },
00:16:19.055 {
00:16:19.055 "name": "BaseBdev3",
00:16:19.055 "uuid": "a37c3a60-4deb-4cf4-a507-c1efe03f2fc8",
00:16:19.055 "is_configured": true,
00:16:19.055 "data_offset": 0,
00:16:19.055 "data_size": 65536
00:16:19.055 },
00:16:19.055 {
00:16:19.055 "name": "BaseBdev4",
00:16:19.055 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.055 "is_configured": false,
00:16:19.055 "data_offset": 0,
00:16:19.055 "data_size": 0
00:16:19.055 }
00:16:19.055 ]
00:16:19.055 }'
00:16:19.055 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:19.055 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.314 [2024-11-08 16:57:48.791402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:19.314 [2024-11-08 16:57:48.791544] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:16:19.314 [2024-11-08 16:57:48.791575] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:16:19.314 [2024-11-08 16:57:48.791966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:16:19.314 BaseBdev4
00:16:19.314 [2024-11-08 16:57:48.792509] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:16:19.314 [2024-11-08 16:57:48.792544] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:16:19.314 [2024-11-08 16:57:48.792796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.314 [
00:16:19.314 {
00:16:19.314 "name": "BaseBdev4",
00:16:19.314 "aliases": [
00:16:19.314 "4c26985f-91d0-4cf8-8cb0-82e98fdcd8c7"
00:16:19.314 ],
00:16:19.314 "product_name": "Malloc disk",
00:16:19.314 "block_size": 512,
00:16:19.314 "num_blocks": 65536,
00:16:19.314 "uuid": "4c26985f-91d0-4cf8-8cb0-82e98fdcd8c7",
00:16:19.314 "assigned_rate_limits": {
00:16:19.314 "rw_ios_per_sec": 0,
00:16:19.314 "rw_mbytes_per_sec": 0,
00:16:19.314 "r_mbytes_per_sec": 0,
00:16:19.314 "w_mbytes_per_sec": 0
00:16:19.314 },
00:16:19.314 "claimed": true,
00:16:19.314 "claim_type": "exclusive_write",
00:16:19.314 "zoned": false,
00:16:19.314 "supported_io_types": {
00:16:19.314 "read": true,
00:16:19.314 "write": true,
00:16:19.314 "unmap": true,
00:16:19.314 "flush": true,
00:16:19.314 "reset": true,
00:16:19.314 "nvme_admin": false,
00:16:19.314 "nvme_io": false,
00:16:19.314 "nvme_io_md": false,
00:16:19.314 "write_zeroes": true,
00:16:19.314 "zcopy": true,
00:16:19.314 "get_zone_info": false,
00:16:19.314 "zone_management": false,
00:16:19.314 "zone_append": false,
00:16:19.314 "compare": false,
00:16:19.314 "compare_and_write": false,
00:16:19.314 "abort": true,
00:16:19.314 "seek_hole": false,
00:16:19.314 "seek_data": false,
00:16:19.314 "copy": true,
00:16:19.314 "nvme_iov_md": false
00:16:19.314 },
00:16:19.314 "memory_domains": [
00:16:19.314 {
00:16:19.314 "dma_device_id": "system",
00:16:19.314 "dma_device_type": 1
00:16:19.314 },
00:16:19.314 {
00:16:19.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:19.314 "dma_device_type": 2
00:16:19.314 }
00:16:19.314 ],
00:16:19.314 "driver_specific": {}
00:16:19.314 }
00:16:19.314 ]
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:19.314 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:19.572 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:19.572 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:19.572 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.572 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.572 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.572 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:19.572 "name": "Existed_Raid",
00:16:19.572 "uuid": "c04f984e-ff45-4993-a795-77df02ca045f",
00:16:19.572 "strip_size_kb": 64,
00:16:19.572 "state": "online",
00:16:19.572 "raid_level": "raid5f",
00:16:19.572 "superblock": false,
00:16:19.572 "num_base_bdevs": 4,
00:16:19.572 "num_base_bdevs_discovered": 4,
00:16:19.572 "num_base_bdevs_operational": 4,
00:16:19.572 "base_bdevs_list": [
00:16:19.573 {
00:16:19.573 "name": "BaseBdev1",
00:16:19.573 "uuid": "cc94158f-f528-4da1-b798-e899ca697993",
00:16:19.573 "is_configured": true,
00:16:19.573 "data_offset": 0,
00:16:19.573 "data_size": 65536
00:16:19.573 },
00:16:19.573 {
00:16:19.573 "name": "BaseBdev2",
00:16:19.573 "uuid": "9a76b50d-cb8b-454e-a842-0dbaa281d105",
00:16:19.573 "is_configured": true,
00:16:19.573 "data_offset": 0,
00:16:19.573 "data_size": 65536
00:16:19.573 },
00:16:19.573 {
00:16:19.573 "name": "BaseBdev3",
00:16:19.573 "uuid": "a37c3a60-4deb-4cf4-a507-c1efe03f2fc8",
00:16:19.573 "is_configured": true,
00:16:19.573 "data_offset": 0,
00:16:19.573 "data_size": 65536
00:16:19.573 },
00:16:19.573 {
00:16:19.573 "name": "BaseBdev4",
00:16:19.573 "uuid": "4c26985f-91d0-4cf8-8cb0-82e98fdcd8c7",
00:16:19.573 "is_configured": true,
00:16:19.573 "data_offset": 0,
00:16:19.573 "data_size": 65536
00:16:19.573 }
00:16:19.573 ]
00:16:19.573 }'
00:16:19.573 16:57:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:19.573 16:57:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.832 [2024-11-08 16:57:49.262972] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:19.832 "name": "Existed_Raid",
00:16:19.832 "aliases": [
00:16:19.832 "c04f984e-ff45-4993-a795-77df02ca045f"
00:16:19.832 ],
00:16:19.832 "product_name": "Raid Volume",
00:16:19.832 "block_size": 512,
00:16:19.832 "num_blocks": 196608,
00:16:19.832 "uuid": "c04f984e-ff45-4993-a795-77df02ca045f",
00:16:19.832 "assigned_rate_limits": {
00:16:19.832 "rw_ios_per_sec": 0,
00:16:19.832 "rw_mbytes_per_sec": 0,
00:16:19.832 "r_mbytes_per_sec": 0,
00:16:19.832 "w_mbytes_per_sec": 0
00:16:19.832 },
00:16:19.832 "claimed": false,
00:16:19.832 "zoned": false,
00:16:19.832 "supported_io_types": {
00:16:19.832 "read": true,
00:16:19.832 "write": true,
00:16:19.832 "unmap": false,
00:16:19.832 "flush": false,
00:16:19.832 "reset": true,
00:16:19.832 "nvme_admin": false,
00:16:19.832 "nvme_io": false,
00:16:19.832 "nvme_io_md": false,
00:16:19.832 "write_zeroes": true,
00:16:19.832 "zcopy": false,
00:16:19.832 "get_zone_info": false,
00:16:19.832 "zone_management": false,
00:16:19.832 "zone_append": false,
00:16:19.832 "compare": false,
00:16:19.832 "compare_and_write": false,
00:16:19.832 "abort": false,
00:16:19.832 "seek_hole": false,
00:16:19.832 "seek_data": false,
00:16:19.832 "copy": false,
00:16:19.832 "nvme_iov_md": false
00:16:19.832 },
00:16:19.832 "driver_specific": {
00:16:19.832 "raid": {
00:16:19.832 "uuid": "c04f984e-ff45-4993-a795-77df02ca045f",
00:16:19.832 "strip_size_kb": 64,
00:16:19.832 "state": "online",
00:16:19.832 "raid_level": "raid5f",
00:16:19.832 "superblock": false,
00:16:19.832 "num_base_bdevs": 4,
00:16:19.832 "num_base_bdevs_discovered": 4,
00:16:19.832 "num_base_bdevs_operational": 4,
00:16:19.832 "base_bdevs_list": [
00:16:19.832 {
00:16:19.832 "name": "BaseBdev1",
00:16:19.832 "uuid": "cc94158f-f528-4da1-b798-e899ca697993",
00:16:19.832 "is_configured": true,
00:16:19.832 "data_offset": 0,
00:16:19.832 "data_size": 65536
00:16:19.832 },
00:16:19.832 {
00:16:19.832 "name": "BaseBdev2",
00:16:19.832 "uuid": "9a76b50d-cb8b-454e-a842-0dbaa281d105",
00:16:19.832 "is_configured": true,
00:16:19.832 "data_offset": 0,
00:16:19.832 "data_size": 65536
00:16:19.832 },
00:16:19.832 {
00:16:19.832 "name": "BaseBdev3",
00:16:19.832 "uuid": "a37c3a60-4deb-4cf4-a507-c1efe03f2fc8",
00:16:19.832 "is_configured": true,
00:16:19.832 "data_offset": 0,
00:16:19.832 "data_size": 65536
00:16:19.832 },
00:16:19.832 {
00:16:19.832 "name": "BaseBdev4",
00:16:19.832 "uuid": "4c26985f-91d0-4cf8-8cb0-82e98fdcd8c7",
00:16:19.832 "is_configured": true,
00:16:19.832 "data_offset": 0,
00:16:19.832 "data_size": 65536
00:16:19.832 }
00:16:19.832 ]
00:16:19.832 }
00:16:19.832 }
00:16:19.832 }'
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:16:19.832 BaseBdev2
00:16:19.832 BaseBdev3
00:16:19.832 BaseBdev4'
00:16:19.832 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:20.091 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:16:20.091 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:20.091 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:16:20.091 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.091 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.092 [2024-11-08 16:57:49.598247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:20.092 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:20.350 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:20.350 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:20.350 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.350 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.350 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.350 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:20.350 "name": "Existed_Raid",
00:16:20.350 "uuid": "c04f984e-ff45-4993-a795-77df02ca045f",
00:16:20.350 "strip_size_kb": 64,
00:16:20.350 "state": "online",
00:16:20.350 "raid_level": "raid5f",
00:16:20.350 "superblock": false,
00:16:20.350 "num_base_bdevs": 4,
00:16:20.350 "num_base_bdevs_discovered": 3,
00:16:20.350 "num_base_bdevs_operational": 3,
00:16:20.350 "base_bdevs_list": [
00:16:20.350 {
00:16:20.350 "name": null,
00:16:20.350 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:20.350 "is_configured": false,
00:16:20.350 "data_offset": 0,
00:16:20.350 "data_size": 65536
00:16:20.350 },
00:16:20.350 {
00:16:20.350 "name": "BaseBdev2",
00:16:20.350 "uuid": "9a76b50d-cb8b-454e-a842-0dbaa281d105",
00:16:20.350 "is_configured": true,
00:16:20.350 "data_offset": 0,
00:16:20.350 "data_size": 65536
00:16:20.350 },
00:16:20.350 {
00:16:20.350 "name": "BaseBdev3",
00:16:20.350 "uuid": "a37c3a60-4deb-4cf4-a507-c1efe03f2fc8",
00:16:20.350 "is_configured": true,
00:16:20.350 "data_offset": 0,
00:16:20.350 "data_size": 65536
00:16:20.350 },
00:16:20.350 {
00:16:20.350 "name": "BaseBdev4",
00:16:20.350 "uuid": "4c26985f-91d0-4cf8-8cb0-82e98fdcd8c7",
00:16:20.350 "is_configured": true,
00:16:20.350 "data_offset": 0,
00:16:20.350 "data_size": 65536
00:16:20.350 }
00:16:20.350 ]
00:16:20.350 }'
00:16:20.350 16:57:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:20.350 16:57:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.609 [2024-11-08 16:57:50.096941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:20.609 [2024-11-08 16:57:50.097128] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:20.609 [2024-11-08 16:57:50.108619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:20.609 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.868 [2024-11-08 16:57:50.160594] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.868 [2024-11-08 16:57:50.232121] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:16:20.868 [2024-11-08 16:57:50.232238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.868 BaseBdev2
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.868 [
00:16:20.868 {
00:16:20.868 "name": "BaseBdev2",
00:16:20.868 "aliases": [
00:16:20.868 "cf5fd649-ce9d-4331-8cdc-41c53f731be6"
00:16:20.868 ],
00:16:20.868 "product_name": "Malloc disk",
00:16:20.868 "block_size": 512,
00:16:20.868 "num_blocks": 65536,
00:16:20.868 "uuid": "cf5fd649-ce9d-4331-8cdc-41c53f731be6",
00:16:20.868 "assigned_rate_limits": {
00:16:20.868 "rw_ios_per_sec": 0,
00:16:20.868 "rw_mbytes_per_sec": 0,
00:16:20.868 "r_mbytes_per_sec": 0,
00:16:20.868 "w_mbytes_per_sec": 0
00:16:20.868 },
00:16:20.868 "claimed": false,
00:16:20.868 "zoned": false,
00:16:20.868 "supported_io_types": {
00:16:20.868 "read": true,
00:16:20.868 "write": true,
00:16:20.868 "unmap": true,
00:16:20.868 "flush": true,
00:16:20.868 "reset": true,
00:16:20.868 "nvme_admin": false,
00:16:20.868 "nvme_io": false,
00:16:20.868 "nvme_io_md": false,
00:16:20.868 "write_zeroes": true,
00:16:20.868 "zcopy": true,
00:16:20.868 "get_zone_info": false,
00:16:20.868 "zone_management": false,
00:16:20.868 "zone_append": false,
00:16:20.868 "compare": false,
00:16:20.868 "compare_and_write": false,
00:16:20.868 "abort": true,
00:16:20.868 "seek_hole": false,
00:16:20.868 "seek_data": false,
00:16:20.868 "copy": true,
00:16:20.868 "nvme_iov_md": false
00:16:20.868 },
00:16:20.868 "memory_domains": [
00:16:20.868 {
00:16:20.868 "dma_device_id": "system",
00:16:20.868 "dma_device_type": 1
00:16:20.868 },
00:16:20.868 { 00:16:20.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.868 "dma_device_type": 2 00:16:20.868 } 00:16:20.868 ], 00:16:20.868 "driver_specific": {} 00:16:20.868 } 00:16:20.868 ] 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.868 BaseBdev3 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.868 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.127 [ 00:16:21.127 { 00:16:21.127 "name": "BaseBdev3", 00:16:21.127 "aliases": [ 00:16:21.127 "521e7b6e-7301-4203-b2f8-a9f13a1ecbf2" 00:16:21.127 ], 00:16:21.127 "product_name": "Malloc disk", 00:16:21.127 "block_size": 512, 00:16:21.127 "num_blocks": 65536, 00:16:21.127 "uuid": "521e7b6e-7301-4203-b2f8-a9f13a1ecbf2", 00:16:21.127 "assigned_rate_limits": { 00:16:21.127 "rw_ios_per_sec": 0, 00:16:21.127 "rw_mbytes_per_sec": 0, 00:16:21.127 "r_mbytes_per_sec": 0, 00:16:21.127 "w_mbytes_per_sec": 0 00:16:21.127 }, 00:16:21.127 "claimed": false, 00:16:21.127 "zoned": false, 00:16:21.127 "supported_io_types": { 00:16:21.127 "read": true, 00:16:21.127 "write": true, 00:16:21.127 "unmap": true, 00:16:21.127 "flush": true, 00:16:21.127 "reset": true, 00:16:21.127 "nvme_admin": false, 00:16:21.127 "nvme_io": false, 00:16:21.127 "nvme_io_md": false, 00:16:21.127 "write_zeroes": true, 00:16:21.127 "zcopy": true, 00:16:21.127 "get_zone_info": false, 00:16:21.127 "zone_management": false, 00:16:21.127 "zone_append": false, 00:16:21.127 "compare": false, 00:16:21.128 "compare_and_write": false, 00:16:21.128 "abort": true, 00:16:21.128 "seek_hole": false, 00:16:21.128 "seek_data": false, 00:16:21.128 "copy": true, 00:16:21.128 "nvme_iov_md": false 00:16:21.128 }, 00:16:21.128 "memory_domains": [ 00:16:21.128 { 00:16:21.128 "dma_device_id": "system", 00:16:21.128 
"dma_device_type": 1 00:16:21.128 }, 00:16:21.128 { 00:16:21.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.128 "dma_device_type": 2 00:16:21.128 } 00:16:21.128 ], 00:16:21.128 "driver_specific": {} 00:16:21.128 } 00:16:21.128 ] 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.128 BaseBdev4 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:21.128 16:57:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.128 [ 00:16:21.128 { 00:16:21.128 "name": "BaseBdev4", 00:16:21.128 "aliases": [ 00:16:21.128 "a0aef3c9-cce1-4a28-a5d8-80944a591887" 00:16:21.128 ], 00:16:21.128 "product_name": "Malloc disk", 00:16:21.128 "block_size": 512, 00:16:21.128 "num_blocks": 65536, 00:16:21.128 "uuid": "a0aef3c9-cce1-4a28-a5d8-80944a591887", 00:16:21.128 "assigned_rate_limits": { 00:16:21.128 "rw_ios_per_sec": 0, 00:16:21.128 "rw_mbytes_per_sec": 0, 00:16:21.128 "r_mbytes_per_sec": 0, 00:16:21.128 "w_mbytes_per_sec": 0 00:16:21.128 }, 00:16:21.128 "claimed": false, 00:16:21.128 "zoned": false, 00:16:21.128 "supported_io_types": { 00:16:21.128 "read": true, 00:16:21.128 "write": true, 00:16:21.128 "unmap": true, 00:16:21.128 "flush": true, 00:16:21.128 "reset": true, 00:16:21.128 "nvme_admin": false, 00:16:21.128 "nvme_io": false, 00:16:21.128 "nvme_io_md": false, 00:16:21.128 "write_zeroes": true, 00:16:21.128 "zcopy": true, 00:16:21.128 "get_zone_info": false, 00:16:21.128 "zone_management": false, 00:16:21.128 "zone_append": false, 00:16:21.128 "compare": false, 00:16:21.128 "compare_and_write": false, 00:16:21.128 "abort": true, 00:16:21.128 "seek_hole": false, 00:16:21.128 "seek_data": false, 00:16:21.128 "copy": true, 00:16:21.128 "nvme_iov_md": false 00:16:21.128 }, 00:16:21.128 "memory_domains": [ 00:16:21.128 { 00:16:21.128 
"dma_device_id": "system", 00:16:21.128 "dma_device_type": 1 00:16:21.128 }, 00:16:21.128 { 00:16:21.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.128 "dma_device_type": 2 00:16:21.128 } 00:16:21.128 ], 00:16:21.128 "driver_specific": {} 00:16:21.128 } 00:16:21.128 ] 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.128 [2024-11-08 16:57:50.475698] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.128 [2024-11-08 16:57:50.475834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.128 [2024-11-08 16:57:50.475895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.128 [2024-11-08 16:57:50.478192] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.128 [2024-11-08 16:57:50.478315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.128 "name": "Existed_Raid", 00:16:21.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.128 "strip_size_kb": 64, 00:16:21.128 "state": "configuring", 00:16:21.128 "raid_level": "raid5f", 00:16:21.128 "superblock": false, 00:16:21.128 
"num_base_bdevs": 4, 00:16:21.128 "num_base_bdevs_discovered": 3, 00:16:21.128 "num_base_bdevs_operational": 4, 00:16:21.128 "base_bdevs_list": [ 00:16:21.128 { 00:16:21.128 "name": "BaseBdev1", 00:16:21.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.128 "is_configured": false, 00:16:21.128 "data_offset": 0, 00:16:21.128 "data_size": 0 00:16:21.128 }, 00:16:21.128 { 00:16:21.128 "name": "BaseBdev2", 00:16:21.128 "uuid": "cf5fd649-ce9d-4331-8cdc-41c53f731be6", 00:16:21.128 "is_configured": true, 00:16:21.128 "data_offset": 0, 00:16:21.128 "data_size": 65536 00:16:21.128 }, 00:16:21.128 { 00:16:21.128 "name": "BaseBdev3", 00:16:21.128 "uuid": "521e7b6e-7301-4203-b2f8-a9f13a1ecbf2", 00:16:21.128 "is_configured": true, 00:16:21.128 "data_offset": 0, 00:16:21.128 "data_size": 65536 00:16:21.128 }, 00:16:21.128 { 00:16:21.128 "name": "BaseBdev4", 00:16:21.128 "uuid": "a0aef3c9-cce1-4a28-a5d8-80944a591887", 00:16:21.128 "is_configured": true, 00:16:21.128 "data_offset": 0, 00:16:21.128 "data_size": 65536 00:16:21.128 } 00:16:21.128 ] 00:16:21.128 }' 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.128 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.697 [2024-11-08 16:57:50.963211] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.697 16:57:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.697 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.697 "name": "Existed_Raid", 00:16:21.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.697 "strip_size_kb": 64, 00:16:21.697 "state": "configuring", 00:16:21.697 "raid_level": "raid5f", 00:16:21.697 "superblock": false, 00:16:21.698 "num_base_bdevs": 4, 
00:16:21.698 "num_base_bdevs_discovered": 2, 00:16:21.698 "num_base_bdevs_operational": 4, 00:16:21.698 "base_bdevs_list": [ 00:16:21.698 { 00:16:21.698 "name": "BaseBdev1", 00:16:21.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.698 "is_configured": false, 00:16:21.698 "data_offset": 0, 00:16:21.698 "data_size": 0 00:16:21.698 }, 00:16:21.698 { 00:16:21.698 "name": null, 00:16:21.698 "uuid": "cf5fd649-ce9d-4331-8cdc-41c53f731be6", 00:16:21.698 "is_configured": false, 00:16:21.698 "data_offset": 0, 00:16:21.698 "data_size": 65536 00:16:21.698 }, 00:16:21.698 { 00:16:21.698 "name": "BaseBdev3", 00:16:21.698 "uuid": "521e7b6e-7301-4203-b2f8-a9f13a1ecbf2", 00:16:21.698 "is_configured": true, 00:16:21.698 "data_offset": 0, 00:16:21.698 "data_size": 65536 00:16:21.698 }, 00:16:21.698 { 00:16:21.698 "name": "BaseBdev4", 00:16:21.698 "uuid": "a0aef3c9-cce1-4a28-a5d8-80944a591887", 00:16:21.698 "is_configured": true, 00:16:21.698 "data_offset": 0, 00:16:21.698 "data_size": 65536 00:16:21.698 } 00:16:21.698 ] 00:16:21.698 }' 00:16:21.698 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.698 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.958 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:21.958 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.958 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.958 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.958 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:22.218 16:57:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.218 [2024-11-08 16:57:51.503934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.218 BaseBdev1 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.218 16:57:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.218 [ 00:16:22.218 { 00:16:22.218 "name": "BaseBdev1", 00:16:22.218 "aliases": [ 00:16:22.218 "489dc2ff-77b5-48f4-a5cd-eedad9b69e34" 00:16:22.218 ], 00:16:22.218 "product_name": "Malloc disk", 00:16:22.218 "block_size": 512, 00:16:22.218 "num_blocks": 65536, 00:16:22.218 "uuid": "489dc2ff-77b5-48f4-a5cd-eedad9b69e34", 00:16:22.218 "assigned_rate_limits": { 00:16:22.218 "rw_ios_per_sec": 0, 00:16:22.218 "rw_mbytes_per_sec": 0, 00:16:22.218 "r_mbytes_per_sec": 0, 00:16:22.218 "w_mbytes_per_sec": 0 00:16:22.218 }, 00:16:22.218 "claimed": true, 00:16:22.218 "claim_type": "exclusive_write", 00:16:22.218 "zoned": false, 00:16:22.218 "supported_io_types": { 00:16:22.218 "read": true, 00:16:22.218 "write": true, 00:16:22.218 "unmap": true, 00:16:22.218 "flush": true, 00:16:22.218 "reset": true, 00:16:22.218 "nvme_admin": false, 00:16:22.218 "nvme_io": false, 00:16:22.218 "nvme_io_md": false, 00:16:22.218 "write_zeroes": true, 00:16:22.218 "zcopy": true, 00:16:22.218 "get_zone_info": false, 00:16:22.218 "zone_management": false, 00:16:22.218 "zone_append": false, 00:16:22.218 "compare": false, 00:16:22.218 "compare_and_write": false, 00:16:22.218 "abort": true, 00:16:22.218 "seek_hole": false, 00:16:22.218 "seek_data": false, 00:16:22.218 "copy": true, 00:16:22.218 "nvme_iov_md": false 00:16:22.218 }, 00:16:22.218 "memory_domains": [ 00:16:22.218 { 00:16:22.218 "dma_device_id": "system", 00:16:22.218 "dma_device_type": 1 00:16:22.218 }, 00:16:22.218 { 00:16:22.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.218 "dma_device_type": 2 00:16:22.218 } 00:16:22.218 ], 00:16:22.218 "driver_specific": {} 00:16:22.218 } 00:16:22.218 ] 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:22.218 16:57:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.218 "name": "Existed_Raid", 00:16:22.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.218 "strip_size_kb": 64, 00:16:22.218 "state": 
"configuring", 00:16:22.218 "raid_level": "raid5f", 00:16:22.218 "superblock": false, 00:16:22.218 "num_base_bdevs": 4, 00:16:22.218 "num_base_bdevs_discovered": 3, 00:16:22.218 "num_base_bdevs_operational": 4, 00:16:22.218 "base_bdevs_list": [ 00:16:22.218 { 00:16:22.218 "name": "BaseBdev1", 00:16:22.218 "uuid": "489dc2ff-77b5-48f4-a5cd-eedad9b69e34", 00:16:22.218 "is_configured": true, 00:16:22.218 "data_offset": 0, 00:16:22.218 "data_size": 65536 00:16:22.218 }, 00:16:22.218 { 00:16:22.218 "name": null, 00:16:22.218 "uuid": "cf5fd649-ce9d-4331-8cdc-41c53f731be6", 00:16:22.218 "is_configured": false, 00:16:22.218 "data_offset": 0, 00:16:22.218 "data_size": 65536 00:16:22.218 }, 00:16:22.218 { 00:16:22.218 "name": "BaseBdev3", 00:16:22.218 "uuid": "521e7b6e-7301-4203-b2f8-a9f13a1ecbf2", 00:16:22.218 "is_configured": true, 00:16:22.218 "data_offset": 0, 00:16:22.218 "data_size": 65536 00:16:22.218 }, 00:16:22.218 { 00:16:22.218 "name": "BaseBdev4", 00:16:22.218 "uuid": "a0aef3c9-cce1-4a28-a5d8-80944a591887", 00:16:22.218 "is_configured": true, 00:16:22.218 "data_offset": 0, 00:16:22.218 "data_size": 65536 00:16:22.218 } 00:16:22.218 ] 00:16:22.218 }' 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.218 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.479 16:57:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.479 [2024-11-08 16:57:51.995258] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.479 16:57:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.479 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.479 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.479 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.479 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.479 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.739 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.739 16:57:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.739 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.739 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.739 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.739 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.739 "name": "Existed_Raid", 00:16:22.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.739 "strip_size_kb": 64, 00:16:22.739 "state": "configuring", 00:16:22.739 "raid_level": "raid5f", 00:16:22.739 "superblock": false, 00:16:22.739 "num_base_bdevs": 4, 00:16:22.739 "num_base_bdevs_discovered": 2, 00:16:22.739 "num_base_bdevs_operational": 4, 00:16:22.739 "base_bdevs_list": [ 00:16:22.739 { 00:16:22.739 "name": "BaseBdev1", 00:16:22.739 "uuid": "489dc2ff-77b5-48f4-a5cd-eedad9b69e34", 00:16:22.739 "is_configured": true, 00:16:22.739 "data_offset": 0, 00:16:22.739 "data_size": 65536 00:16:22.739 }, 00:16:22.739 { 00:16:22.739 "name": null, 00:16:22.739 "uuid": "cf5fd649-ce9d-4331-8cdc-41c53f731be6", 00:16:22.739 "is_configured": false, 00:16:22.739 "data_offset": 0, 00:16:22.739 "data_size": 65536 00:16:22.739 }, 00:16:22.739 { 00:16:22.739 "name": null, 00:16:22.739 "uuid": "521e7b6e-7301-4203-b2f8-a9f13a1ecbf2", 00:16:22.739 "is_configured": false, 00:16:22.739 "data_offset": 0, 00:16:22.739 "data_size": 65536 00:16:22.739 }, 00:16:22.739 { 00:16:22.739 "name": "BaseBdev4", 00:16:22.739 "uuid": "a0aef3c9-cce1-4a28-a5d8-80944a591887", 00:16:22.739 "is_configured": true, 00:16:22.739 "data_offset": 0, 00:16:22.739 "data_size": 65536 00:16:22.739 } 00:16:22.739 ] 00:16:22.739 }' 00:16:22.739 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.739 16:57:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.999 [2024-11-08 16:57:52.502477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.999 
16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.999 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.261 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.261 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.261 "name": "Existed_Raid", 00:16:23.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.261 "strip_size_kb": 64, 00:16:23.261 "state": "configuring", 00:16:23.261 "raid_level": "raid5f", 00:16:23.261 "superblock": false, 00:16:23.261 "num_base_bdevs": 4, 00:16:23.261 "num_base_bdevs_discovered": 3, 00:16:23.261 "num_base_bdevs_operational": 4, 00:16:23.261 "base_bdevs_list": [ 00:16:23.261 { 00:16:23.261 "name": "BaseBdev1", 00:16:23.261 "uuid": "489dc2ff-77b5-48f4-a5cd-eedad9b69e34", 00:16:23.261 "is_configured": true, 00:16:23.261 "data_offset": 0, 00:16:23.261 "data_size": 65536 00:16:23.261 }, 00:16:23.261 { 00:16:23.261 "name": null, 00:16:23.261 "uuid": "cf5fd649-ce9d-4331-8cdc-41c53f731be6", 00:16:23.261 "is_configured": 
false, 00:16:23.261 "data_offset": 0, 00:16:23.261 "data_size": 65536 00:16:23.261 }, 00:16:23.261 { 00:16:23.261 "name": "BaseBdev3", 00:16:23.261 "uuid": "521e7b6e-7301-4203-b2f8-a9f13a1ecbf2", 00:16:23.261 "is_configured": true, 00:16:23.261 "data_offset": 0, 00:16:23.261 "data_size": 65536 00:16:23.261 }, 00:16:23.261 { 00:16:23.261 "name": "BaseBdev4", 00:16:23.261 "uuid": "a0aef3c9-cce1-4a28-a5d8-80944a591887", 00:16:23.261 "is_configured": true, 00:16:23.261 "data_offset": 0, 00:16:23.261 "data_size": 65536 00:16:23.261 } 00:16:23.261 ] 00:16:23.261 }' 00:16:23.261 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.261 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.521 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.521 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.521 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.521 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.521 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.521 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:23.521 16:57:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:23.521 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.521 16:57:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.521 [2024-11-08 16:57:52.993672] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.521 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.781 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.781 "name": "Existed_Raid", 00:16:23.781 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:23.781 "strip_size_kb": 64, 00:16:23.781 "state": "configuring", 00:16:23.781 "raid_level": "raid5f", 00:16:23.781 "superblock": false, 00:16:23.781 "num_base_bdevs": 4, 00:16:23.781 "num_base_bdevs_discovered": 2, 00:16:23.781 "num_base_bdevs_operational": 4, 00:16:23.781 "base_bdevs_list": [ 00:16:23.781 { 00:16:23.781 "name": null, 00:16:23.781 "uuid": "489dc2ff-77b5-48f4-a5cd-eedad9b69e34", 00:16:23.781 "is_configured": false, 00:16:23.781 "data_offset": 0, 00:16:23.781 "data_size": 65536 00:16:23.781 }, 00:16:23.781 { 00:16:23.781 "name": null, 00:16:23.781 "uuid": "cf5fd649-ce9d-4331-8cdc-41c53f731be6", 00:16:23.781 "is_configured": false, 00:16:23.781 "data_offset": 0, 00:16:23.781 "data_size": 65536 00:16:23.781 }, 00:16:23.781 { 00:16:23.781 "name": "BaseBdev3", 00:16:23.781 "uuid": "521e7b6e-7301-4203-b2f8-a9f13a1ecbf2", 00:16:23.781 "is_configured": true, 00:16:23.781 "data_offset": 0, 00:16:23.781 "data_size": 65536 00:16:23.781 }, 00:16:23.781 { 00:16:23.781 "name": "BaseBdev4", 00:16:23.781 "uuid": "a0aef3c9-cce1-4a28-a5d8-80944a591887", 00:16:23.781 "is_configured": true, 00:16:23.781 "data_offset": 0, 00:16:23.781 "data_size": 65536 00:16:23.781 } 00:16:23.781 ] 00:16:23.781 }' 00:16:23.781 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.781 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.044 [2024-11-08 16:57:53.528037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.044 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.045 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.045 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.045 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.045 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.045 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.045 16:57:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.045 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.045 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.045 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.045 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.306 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.306 "name": "Existed_Raid", 00:16:24.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.306 "strip_size_kb": 64, 00:16:24.306 "state": "configuring", 00:16:24.306 "raid_level": "raid5f", 00:16:24.306 "superblock": false, 00:16:24.306 "num_base_bdevs": 4, 00:16:24.306 "num_base_bdevs_discovered": 3, 00:16:24.306 "num_base_bdevs_operational": 4, 00:16:24.306 "base_bdevs_list": [ 00:16:24.306 { 00:16:24.306 "name": null, 00:16:24.306 "uuid": "489dc2ff-77b5-48f4-a5cd-eedad9b69e34", 00:16:24.306 "is_configured": false, 00:16:24.306 "data_offset": 0, 00:16:24.306 "data_size": 65536 00:16:24.306 }, 00:16:24.306 { 00:16:24.306 "name": "BaseBdev2", 00:16:24.306 "uuid": "cf5fd649-ce9d-4331-8cdc-41c53f731be6", 00:16:24.306 "is_configured": true, 00:16:24.306 "data_offset": 0, 00:16:24.306 "data_size": 65536 00:16:24.306 }, 00:16:24.306 { 00:16:24.306 "name": "BaseBdev3", 00:16:24.306 "uuid": "521e7b6e-7301-4203-b2f8-a9f13a1ecbf2", 00:16:24.306 "is_configured": true, 00:16:24.306 "data_offset": 0, 00:16:24.306 "data_size": 65536 00:16:24.306 }, 00:16:24.306 { 00:16:24.306 "name": "BaseBdev4", 00:16:24.306 "uuid": "a0aef3c9-cce1-4a28-a5d8-80944a591887", 00:16:24.306 "is_configured": true, 00:16:24.306 "data_offset": 0, 00:16:24.306 "data_size": 65536 00:16:24.306 } 00:16:24.306 ] 00:16:24.306 }' 00:16:24.306 16:57:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.306 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.566 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.566 16:57:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:24.566 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.566 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.566 16:57:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 489dc2ff-77b5-48f4-a5cd-eedad9b69e34 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.566 [2024-11-08 16:57:54.074847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:24.566 [2024-11-08 
16:57:54.074913] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:24.566 [2024-11-08 16:57:54.074923] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:24.566 [2024-11-08 16:57:54.075215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:24.566 [2024-11-08 16:57:54.075797] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:24.566 [2024-11-08 16:57:54.075826] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:16:24.566 [2024-11-08 16:57:54.076044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.566 NewBaseBdev 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.566 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.826 [ 00:16:24.826 { 00:16:24.826 "name": "NewBaseBdev", 00:16:24.826 "aliases": [ 00:16:24.826 "489dc2ff-77b5-48f4-a5cd-eedad9b69e34" 00:16:24.826 ], 00:16:24.826 "product_name": "Malloc disk", 00:16:24.826 "block_size": 512, 00:16:24.826 "num_blocks": 65536, 00:16:24.826 "uuid": "489dc2ff-77b5-48f4-a5cd-eedad9b69e34", 00:16:24.826 "assigned_rate_limits": { 00:16:24.826 "rw_ios_per_sec": 0, 00:16:24.826 "rw_mbytes_per_sec": 0, 00:16:24.826 "r_mbytes_per_sec": 0, 00:16:24.826 "w_mbytes_per_sec": 0 00:16:24.826 }, 00:16:24.826 "claimed": true, 00:16:24.826 "claim_type": "exclusive_write", 00:16:24.826 "zoned": false, 00:16:24.826 "supported_io_types": { 00:16:24.826 "read": true, 00:16:24.826 "write": true, 00:16:24.826 "unmap": true, 00:16:24.826 "flush": true, 00:16:24.826 "reset": true, 00:16:24.826 "nvme_admin": false, 00:16:24.826 "nvme_io": false, 00:16:24.826 "nvme_io_md": false, 00:16:24.826 "write_zeroes": true, 00:16:24.826 "zcopy": true, 00:16:24.826 "get_zone_info": false, 00:16:24.826 "zone_management": false, 00:16:24.826 "zone_append": false, 00:16:24.826 "compare": false, 00:16:24.826 "compare_and_write": false, 00:16:24.826 "abort": true, 00:16:24.826 "seek_hole": false, 00:16:24.826 "seek_data": false, 00:16:24.826 "copy": true, 00:16:24.826 "nvme_iov_md": false 00:16:24.826 }, 00:16:24.826 "memory_domains": [ 00:16:24.826 { 00:16:24.826 "dma_device_id": "system", 00:16:24.826 "dma_device_type": 1 00:16:24.826 }, 00:16:24.826 { 00:16:24.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.826 "dma_device_type": 2 00:16:24.826 } 
00:16:24.826 ], 00:16:24.826 "driver_specific": {} 00:16:24.826 } 00:16:24.826 ] 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.826 "name": "Existed_Raid", 00:16:24.826 "uuid": "db42ec4f-ffd2-4630-8f0b-73403f942186", 00:16:24.826 "strip_size_kb": 64, 00:16:24.826 "state": "online", 00:16:24.826 "raid_level": "raid5f", 00:16:24.826 "superblock": false, 00:16:24.826 "num_base_bdevs": 4, 00:16:24.826 "num_base_bdevs_discovered": 4, 00:16:24.826 "num_base_bdevs_operational": 4, 00:16:24.826 "base_bdevs_list": [ 00:16:24.826 { 00:16:24.826 "name": "NewBaseBdev", 00:16:24.826 "uuid": "489dc2ff-77b5-48f4-a5cd-eedad9b69e34", 00:16:24.826 "is_configured": true, 00:16:24.826 "data_offset": 0, 00:16:24.826 "data_size": 65536 00:16:24.826 }, 00:16:24.826 { 00:16:24.826 "name": "BaseBdev2", 00:16:24.826 "uuid": "cf5fd649-ce9d-4331-8cdc-41c53f731be6", 00:16:24.826 "is_configured": true, 00:16:24.826 "data_offset": 0, 00:16:24.826 "data_size": 65536 00:16:24.826 }, 00:16:24.826 { 00:16:24.826 "name": "BaseBdev3", 00:16:24.826 "uuid": "521e7b6e-7301-4203-b2f8-a9f13a1ecbf2", 00:16:24.826 "is_configured": true, 00:16:24.826 "data_offset": 0, 00:16:24.826 "data_size": 65536 00:16:24.826 }, 00:16:24.826 { 00:16:24.826 "name": "BaseBdev4", 00:16:24.826 "uuid": "a0aef3c9-cce1-4a28-a5d8-80944a591887", 00:16:24.826 "is_configured": true, 00:16:24.826 "data_offset": 0, 00:16:24.826 "data_size": 65536 00:16:24.826 } 00:16:24.826 ] 00:16:24.826 }' 00:16:24.826 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.827 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.086 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:25.086 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:25.086 16:57:54 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:25.086 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:25.086 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:25.086 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:25.086 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:25.086 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.086 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.086 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:25.346 [2024-11-08 16:57:54.614294] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:25.346 "name": "Existed_Raid", 00:16:25.346 "aliases": [ 00:16:25.346 "db42ec4f-ffd2-4630-8f0b-73403f942186" 00:16:25.346 ], 00:16:25.346 "product_name": "Raid Volume", 00:16:25.346 "block_size": 512, 00:16:25.346 "num_blocks": 196608, 00:16:25.346 "uuid": "db42ec4f-ffd2-4630-8f0b-73403f942186", 00:16:25.346 "assigned_rate_limits": { 00:16:25.346 "rw_ios_per_sec": 0, 00:16:25.346 "rw_mbytes_per_sec": 0, 00:16:25.346 "r_mbytes_per_sec": 0, 00:16:25.346 "w_mbytes_per_sec": 0 00:16:25.346 }, 00:16:25.346 "claimed": false, 00:16:25.346 "zoned": false, 00:16:25.346 "supported_io_types": { 00:16:25.346 "read": true, 00:16:25.346 "write": true, 00:16:25.346 "unmap": false, 00:16:25.346 "flush": false, 00:16:25.346 "reset": true, 00:16:25.346 "nvme_admin": false, 00:16:25.346 "nvme_io": false, 00:16:25.346 "nvme_io_md": 
false, 00:16:25.346 "write_zeroes": true, 00:16:25.346 "zcopy": false, 00:16:25.346 "get_zone_info": false, 00:16:25.346 "zone_management": false, 00:16:25.346 "zone_append": false, 00:16:25.346 "compare": false, 00:16:25.346 "compare_and_write": false, 00:16:25.346 "abort": false, 00:16:25.346 "seek_hole": false, 00:16:25.346 "seek_data": false, 00:16:25.346 "copy": false, 00:16:25.346 "nvme_iov_md": false 00:16:25.346 }, 00:16:25.346 "driver_specific": { 00:16:25.346 "raid": { 00:16:25.346 "uuid": "db42ec4f-ffd2-4630-8f0b-73403f942186", 00:16:25.346 "strip_size_kb": 64, 00:16:25.346 "state": "online", 00:16:25.346 "raid_level": "raid5f", 00:16:25.346 "superblock": false, 00:16:25.346 "num_base_bdevs": 4, 00:16:25.346 "num_base_bdevs_discovered": 4, 00:16:25.346 "num_base_bdevs_operational": 4, 00:16:25.346 "base_bdevs_list": [ 00:16:25.346 { 00:16:25.346 "name": "NewBaseBdev", 00:16:25.346 "uuid": "489dc2ff-77b5-48f4-a5cd-eedad9b69e34", 00:16:25.346 "is_configured": true, 00:16:25.346 "data_offset": 0, 00:16:25.346 "data_size": 65536 00:16:25.346 }, 00:16:25.346 { 00:16:25.346 "name": "BaseBdev2", 00:16:25.346 "uuid": "cf5fd649-ce9d-4331-8cdc-41c53f731be6", 00:16:25.346 "is_configured": true, 00:16:25.346 "data_offset": 0, 00:16:25.346 "data_size": 65536 00:16:25.346 }, 00:16:25.346 { 00:16:25.346 "name": "BaseBdev3", 00:16:25.346 "uuid": "521e7b6e-7301-4203-b2f8-a9f13a1ecbf2", 00:16:25.346 "is_configured": true, 00:16:25.346 "data_offset": 0, 00:16:25.346 "data_size": 65536 00:16:25.346 }, 00:16:25.346 { 00:16:25.346 "name": "BaseBdev4", 00:16:25.346 "uuid": "a0aef3c9-cce1-4a28-a5d8-80944a591887", 00:16:25.346 "is_configured": true, 00:16:25.346 "data_offset": 0, 00:16:25.346 "data_size": 65536 00:16:25.346 } 00:16:25.346 ] 00:16:25.346 } 00:16:25.346 } 00:16:25.346 }' 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.346 16:57:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:25.346 BaseBdev2 00:16:25.346 BaseBdev3 00:16:25.346 BaseBdev4' 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.346 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.606 16:57:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.606 [2024-11-08 16:57:54.945488] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.606 [2024-11-08 16:57:54.945530] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.606 [2024-11-08 16:57:54.945629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.606 [2024-11-08 16:57:54.945919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.606 [2024-11-08 16:57:54.945936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93346 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93346 ']' 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93346 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:16:25.606 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93346
00:16:25.606 killing process with pid 93346 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:25.607 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:25.607 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93346'
00:16:25.607 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93346
00:16:25.607 [2024-11-08 16:57:54.995667] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:25.607 16:57:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93346
00:16:25.607 [2024-11-08 16:57:55.037594] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:16:25.867
00:16:25.867 real 0m9.985s
00:16:25.867 user 0m17.090s
00:16:25.867 sys 0m2.064s
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:25.867 ************************************
00:16:25.867 END TEST raid5f_state_function_test
00:16:25.867 ************************************
00:16:25.867 16:57:55 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true
00:16:25.867 16:57:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:16:25.867 16:57:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:25.867 16:57:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:25.867 ************************************
00:16:25.867 START TEST raid5f_state_function_test_sb
00:16:25.867 ************************************
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=94001
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:16:25.867 Process raid pid: 94001 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94001'
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 94001
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94001 ']'
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:25.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 16:57:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:25.867 16:57:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:26.128 [2024-11-08 16:57:55.435686] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:16:26.128 [2024-11-08 16:57:55.435824] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:26.128 [2024-11-08 16:57:55.598157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:26.128 [2024-11-08 16:57:55.649281] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:16:26.387 [2024-11-08 16:57:55.693678] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:26.387 [2024-11-08 16:57:55.693723] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:26.959 [2024-11-08 16:57:56.280023] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:26.959 [2024-11-08 16:57:56.280082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:26.959 [2024-11-08 16:57:56.280096] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:26.959 [2024-11-08 16:57:56.280108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:26.959 [2024-11-08 16:57:56.280115] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:26.959 [2024-11-08 16:57:56.280129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:26.959 [2024-11-08 16:57:56.280136] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:26.959 [2024-11-08 16:57:56.280147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:26.959 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:26.960 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:26.960 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:26.960 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:26.960 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:26.960 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:26.960 "name": "Existed_Raid",
00:16:26.960 "uuid": "423727b3-392f-4e2f-a196-3f6ddc76faf6",
00:16:26.960 "strip_size_kb": 64,
00:16:26.960 "state": "configuring",
00:16:26.960 "raid_level": "raid5f",
00:16:26.960 "superblock": true,
00:16:26.960 "num_base_bdevs": 4,
00:16:26.960 "num_base_bdevs_discovered": 0,
00:16:26.960 "num_base_bdevs_operational": 4,
00:16:26.960 "base_bdevs_list": [
00:16:26.960 {
00:16:26.960 "name": "BaseBdev1",
00:16:26.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:26.960 "is_configured": false,
00:16:26.960 "data_offset": 0,
00:16:26.960 "data_size": 0
00:16:26.960 },
00:16:26.960 {
00:16:26.960 "name": "BaseBdev2",
00:16:26.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:26.960 "is_configured": false,
00:16:26.960 "data_offset": 0,
00:16:26.960 "data_size": 0
00:16:26.960 },
00:16:26.960 {
00:16:26.960 "name": "BaseBdev3",
00:16:26.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:26.960 "is_configured": false,
00:16:26.960 "data_offset": 0,
00:16:26.960 "data_size": 0
00:16:26.960 },
00:16:26.960 {
00:16:26.960 "name": "BaseBdev4",
00:16:26.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:26.960 "is_configured": false,
00:16:26.960 "data_offset": 0,
00:16:26.960 "data_size": 0
00:16:26.960 }
00:16:26.960 ]
00:16:26.960 }'
00:16:26.960 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:26.960 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:27.530 [2024-11-08 16:57:56.763095] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:27.530 [2024-11-08 16:57:56.763150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:27.530 [2024-11-08 16:57:56.775199] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:27.530 [2024-11-08 16:57:56.775265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:27.530 [2024-11-08 16:57:56.775275] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:27.530 [2024-11-08 16:57:56.775286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:27.530 [2024-11-08 16:57:56.775293] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:27.530 [2024-11-08 16:57:56.775303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:27.530 [2024-11-08 16:57:56.775310] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:27.530 [2024-11-08 16:57:56.775321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:27.530 [2024-11-08 16:57:56.796584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:27.530 BaseBdev1
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:27.530 [
00:16:27.530 {
00:16:27.530 "name": "BaseBdev1",
00:16:27.530 "aliases": [
00:16:27.530 "88b074af-290c-4bc8-b5f4-106df6abe9ab"
00:16:27.530 ],
00:16:27.530 "product_name": "Malloc disk",
00:16:27.530 "block_size": 512,
00:16:27.530 "num_blocks": 65536,
00:16:27.530 "uuid": "88b074af-290c-4bc8-b5f4-106df6abe9ab",
00:16:27.530 "assigned_rate_limits": {
00:16:27.530 "rw_ios_per_sec": 0,
00:16:27.530 "rw_mbytes_per_sec": 0,
00:16:27.530 "r_mbytes_per_sec": 0,
00:16:27.530 "w_mbytes_per_sec": 0
00:16:27.530 },
00:16:27.530 "claimed": true,
00:16:27.530 "claim_type": "exclusive_write",
00:16:27.530 "zoned": false,
00:16:27.530 "supported_io_types": {
00:16:27.530 "read": true,
00:16:27.530 "write": true,
00:16:27.530 "unmap": true,
00:16:27.530 "flush": true,
00:16:27.530 "reset": true,
00:16:27.530 "nvme_admin": false,
00:16:27.530 "nvme_io": false,
00:16:27.530 "nvme_io_md": false,
00:16:27.530 "write_zeroes": true,
00:16:27.530 "zcopy": true,
00:16:27.530 "get_zone_info": false,
00:16:27.530 "zone_management": false,
00:16:27.530 "zone_append": false,
00:16:27.530 "compare": false,
00:16:27.530 "compare_and_write": false,
00:16:27.530 "abort": true,
00:16:27.530 "seek_hole": false,
00:16:27.530 "seek_data": false,
00:16:27.530 "copy": true,
00:16:27.530 "nvme_iov_md": false
00:16:27.530 },
00:16:27.530 "memory_domains": [
00:16:27.530 {
00:16:27.530 "dma_device_id": "system",
00:16:27.530 "dma_device_type": 1
00:16:27.530 },
00:16:27.530 {
00:16:27.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:27.530 "dma_device_type": 2
00:16:27.530 }
00:16:27.530 ],
00:16:27.530 "driver_specific": {}
00:16:27.530 }
00:16:27.530 ]
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:27.530 "name": "Existed_Raid",
00:16:27.530 "uuid": "47ac947f-8666-492d-9355-026372a8ddc9",
00:16:27.530 "strip_size_kb": 64,
00:16:27.530 "state": "configuring",
00:16:27.530 "raid_level": "raid5f",
00:16:27.530 "superblock": true,
00:16:27.530 "num_base_bdevs": 4,
00:16:27.530 "num_base_bdevs_discovered": 1,
00:16:27.530 "num_base_bdevs_operational": 4,
00:16:27.530 "base_bdevs_list": [
00:16:27.530 {
00:16:27.530 "name": "BaseBdev1",
00:16:27.530 "uuid": "88b074af-290c-4bc8-b5f4-106df6abe9ab",
00:16:27.530 "is_configured": true,
00:16:27.530 "data_offset": 2048,
00:16:27.530 "data_size": 63488
00:16:27.530 },
00:16:27.530 {
00:16:27.530 "name": "BaseBdev2",
00:16:27.530 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:27.530 "is_configured": false,
00:16:27.530 "data_offset": 0,
00:16:27.530 "data_size": 0
00:16:27.530 },
00:16:27.530 {
00:16:27.530 "name": "BaseBdev3",
00:16:27.530 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:27.530 "is_configured": false,
00:16:27.530 "data_offset": 0,
00:16:27.530 "data_size": 0
00:16:27.530 },
00:16:27.530 {
00:16:27.530 "name": "BaseBdev4",
00:16:27.530 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:27.530 "is_configured": false,
00:16:27.530 "data_offset": 0,
00:16:27.530 "data_size": 0
00:16:27.530 }
00:16:27.530 ]
00:16:27.530 }'
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:27.530 16:57:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:28.099 [2024-11-08 16:57:57.359821] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:28.099 [2024-11-08 16:57:57.359899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:28.099 [2024-11-08 16:57:57.371932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:28.099 [2024-11-08 16:57:57.374254] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:28.099 [2024-11-08 16:57:57.374323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:28.099 [2024-11-08 16:57:57.374334] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:28.099 [2024-11-08 16:57:57.374344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:28.099 [2024-11-08 16:57:57.374352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:28.099 [2024-11-08 16:57:57.374361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.099 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:28.099 "name": "Existed_Raid",
00:16:28.099 "uuid": "315a7944-52c4-4ecb-b714-1d5f87be5ee3",
00:16:28.099 "strip_size_kb": 64,
00:16:28.099 "state": "configuring",
00:16:28.099 "raid_level": "raid5f",
00:16:28.099 "superblock": true,
00:16:28.099 "num_base_bdevs": 4,
00:16:28.099 "num_base_bdevs_discovered": 1,
00:16:28.099 "num_base_bdevs_operational": 4,
00:16:28.099 "base_bdevs_list": [
00:16:28.099 {
00:16:28.099 "name": "BaseBdev1",
00:16:28.099 "uuid": "88b074af-290c-4bc8-b5f4-106df6abe9ab",
00:16:28.099 "is_configured": true,
00:16:28.099 "data_offset": 2048,
00:16:28.099 "data_size": 63488
00:16:28.099 },
00:16:28.099 {
00:16:28.099 "name": "BaseBdev2",
00:16:28.099 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.099 "is_configured": false,
00:16:28.099 "data_offset": 0,
00:16:28.099 "data_size": 0
00:16:28.099 },
00:16:28.099 {
00:16:28.099 "name": "BaseBdev3",
00:16:28.100 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.100 "is_configured": false,
00:16:28.100 "data_offset": 0,
00:16:28.100 "data_size": 0
00:16:28.100 },
00:16:28.100 {
00:16:28.100 "name": "BaseBdev4",
00:16:28.100 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.100 "is_configured": false,
00:16:28.100 "data_offset": 0,
00:16:28.100 "data_size": 0
00:16:28.100 }
00:16:28.100 ]
00:16:28.100 }'
00:16:28.100 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:28.100 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:28.359 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:16:28.359 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.359 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:28.618 [2024-11-08 16:57:57.903521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:28.618 BaseBdev2
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:28.619 [
00:16:28.619 {
00:16:28.619 "name": "BaseBdev2",
00:16:28.619 "aliases": [
00:16:28.619 "509e60ac-10be-434c-b612-af549d91ab9c"
00:16:28.619 ],
00:16:28.619 "product_name": "Malloc disk",
00:16:28.619 "block_size": 512,
00:16:28.619 "num_blocks": 65536,
00:16:28.619 "uuid": "509e60ac-10be-434c-b612-af549d91ab9c",
00:16:28.619 "assigned_rate_limits": {
00:16:28.619 "rw_ios_per_sec": 0,
00:16:28.619 "rw_mbytes_per_sec": 0,
00:16:28.619 "r_mbytes_per_sec": 0,
00:16:28.619 "w_mbytes_per_sec": 0
00:16:28.619 },
00:16:28.619 "claimed": true,
00:16:28.619 "claim_type": "exclusive_write",
00:16:28.619 "zoned": false,
00:16:28.619 "supported_io_types": {
00:16:28.619 "read": true,
00:16:28.619 "write": true,
00:16:28.619 "unmap": true,
00:16:28.619 "flush": true,
00:16:28.619 "reset": true,
00:16:28.619 "nvme_admin": false,
00:16:28.619 "nvme_io": false,
00:16:28.619 "nvme_io_md": false,
00:16:28.619 "write_zeroes": true,
00:16:28.619 "zcopy": true,
00:16:28.619 "get_zone_info": false,
00:16:28.619 "zone_management": false,
00:16:28.619 "zone_append": false,
00:16:28.619 "compare": false,
00:16:28.619 "compare_and_write": false,
00:16:28.619 "abort": true,
00:16:28.619 "seek_hole": false,
00:16:28.619 "seek_data": false,
00:16:28.619 "copy": true,
00:16:28.619 "nvme_iov_md": false
00:16:28.619 },
00:16:28.619 "memory_domains": [
00:16:28.619 {
00:16:28.619 "dma_device_id": "system",
00:16:28.619 "dma_device_type": 1
00:16:28.619 },
00:16:28.619 {
00:16:28.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:28.619 "dma_device_type": 2
00:16:28.619 }
00:16:28.619 ],
00:16:28.619 "driver_specific": {}
00:16:28.619 }
00:16:28.619 ]
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:28.619 "name": "Existed_Raid",
00:16:28.619 "uuid": "315a7944-52c4-4ecb-b714-1d5f87be5ee3",
00:16:28.619 "strip_size_kb": 64,
00:16:28.619 "state": "configuring",
00:16:28.619 "raid_level": "raid5f",
00:16:28.619 "superblock": true,
00:16:28.619 "num_base_bdevs": 4,
00:16:28.619 "num_base_bdevs_discovered": 2,
00:16:28.619 "num_base_bdevs_operational": 4,
00:16:28.619 "base_bdevs_list": [
00:16:28.619 {
00:16:28.619 "name": "BaseBdev1",
00:16:28.619 "uuid": "88b074af-290c-4bc8-b5f4-106df6abe9ab",
00:16:28.619 "is_configured": true,
00:16:28.619 "data_offset": 2048,
00:16:28.619 "data_size": 63488
00:16:28.619 },
00:16:28.619 {
00:16:28.619 "name": "BaseBdev2",
00:16:28.619 "uuid": "509e60ac-10be-434c-b612-af549d91ab9c",
00:16:28.619 "is_configured": true,
00:16:28.619 "data_offset": 2048,
00:16:28.619 "data_size": 63488
00:16:28.619 },
00:16:28.619 {
00:16:28.619 "name": "BaseBdev3",
00:16:28.619 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.619 "is_configured": false,
00:16:28.619 "data_offset": 0,
00:16:28.619 "data_size": 0
00:16:28.619 },
00:16:28.619 {
00:16:28.619 "name": "BaseBdev4",
00:16:28.619 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.619 "is_configured": false,
00:16:28.619 "data_offset": 0,
00:16:28.619 "data_size": 0
00:16:28.619 }
00:16:28.619 ]
00:16:28.619 }'
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:28.619 16:57:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:29.189 [2024-11-08 16:57:58.426920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:29.189 BaseBdev3
00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.189 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.189 [ 00:16:29.189 { 00:16:29.189 "name": "BaseBdev3", 00:16:29.189 "aliases": [ 00:16:29.189 "4dbca618-18bc-4579-b582-64ce3ce416d4" 00:16:29.189 ], 00:16:29.189 "product_name": "Malloc disk", 00:16:29.189 "block_size": 512, 00:16:29.189 "num_blocks": 65536, 00:16:29.189 "uuid": "4dbca618-18bc-4579-b582-64ce3ce416d4", 00:16:29.189 
"assigned_rate_limits": { 00:16:29.189 "rw_ios_per_sec": 0, 00:16:29.189 "rw_mbytes_per_sec": 0, 00:16:29.189 "r_mbytes_per_sec": 0, 00:16:29.189 "w_mbytes_per_sec": 0 00:16:29.189 }, 00:16:29.189 "claimed": true, 00:16:29.189 "claim_type": "exclusive_write", 00:16:29.189 "zoned": false, 00:16:29.189 "supported_io_types": { 00:16:29.189 "read": true, 00:16:29.189 "write": true, 00:16:29.189 "unmap": true, 00:16:29.189 "flush": true, 00:16:29.189 "reset": true, 00:16:29.189 "nvme_admin": false, 00:16:29.189 "nvme_io": false, 00:16:29.189 "nvme_io_md": false, 00:16:29.189 "write_zeroes": true, 00:16:29.190 "zcopy": true, 00:16:29.190 "get_zone_info": false, 00:16:29.190 "zone_management": false, 00:16:29.190 "zone_append": false, 00:16:29.190 "compare": false, 00:16:29.190 "compare_and_write": false, 00:16:29.190 "abort": true, 00:16:29.190 "seek_hole": false, 00:16:29.190 "seek_data": false, 00:16:29.190 "copy": true, 00:16:29.190 "nvme_iov_md": false 00:16:29.190 }, 00:16:29.190 "memory_domains": [ 00:16:29.190 { 00:16:29.190 "dma_device_id": "system", 00:16:29.190 "dma_device_type": 1 00:16:29.190 }, 00:16:29.190 { 00:16:29.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.190 "dma_device_type": 2 00:16:29.190 } 00:16:29.190 ], 00:16:29.190 "driver_specific": {} 00:16:29.190 } 00:16:29.190 ] 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.190 "name": "Existed_Raid", 00:16:29.190 "uuid": "315a7944-52c4-4ecb-b714-1d5f87be5ee3", 00:16:29.190 "strip_size_kb": 64, 00:16:29.190 "state": "configuring", 00:16:29.190 "raid_level": "raid5f", 00:16:29.190 "superblock": true, 00:16:29.190 "num_base_bdevs": 4, 00:16:29.190 "num_base_bdevs_discovered": 3, 
00:16:29.190 "num_base_bdevs_operational": 4, 00:16:29.190 "base_bdevs_list": [ 00:16:29.190 { 00:16:29.190 "name": "BaseBdev1", 00:16:29.190 "uuid": "88b074af-290c-4bc8-b5f4-106df6abe9ab", 00:16:29.190 "is_configured": true, 00:16:29.190 "data_offset": 2048, 00:16:29.190 "data_size": 63488 00:16:29.190 }, 00:16:29.190 { 00:16:29.190 "name": "BaseBdev2", 00:16:29.190 "uuid": "509e60ac-10be-434c-b612-af549d91ab9c", 00:16:29.190 "is_configured": true, 00:16:29.190 "data_offset": 2048, 00:16:29.190 "data_size": 63488 00:16:29.190 }, 00:16:29.190 { 00:16:29.190 "name": "BaseBdev3", 00:16:29.190 "uuid": "4dbca618-18bc-4579-b582-64ce3ce416d4", 00:16:29.190 "is_configured": true, 00:16:29.190 "data_offset": 2048, 00:16:29.190 "data_size": 63488 00:16:29.190 }, 00:16:29.190 { 00:16:29.190 "name": "BaseBdev4", 00:16:29.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.190 "is_configured": false, 00:16:29.190 "data_offset": 0, 00:16:29.190 "data_size": 0 00:16:29.190 } 00:16:29.190 ] 00:16:29.190 }' 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.190 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.449 [2024-11-08 16:57:58.925881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:29.449 [2024-11-08 16:57:58.926297] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:29.449 BaseBdev4 00:16:29.449 [2024-11-08 16:57:58.926360] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 
00:16:29.449 [2024-11-08 16:57:58.926712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:29.449 [2024-11-08 16:57:58.927287] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:29.449 [2024-11-08 16:57:58.927305] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:29.449 [2024-11-08 16:57:58.927457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:29.449 16:57:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.449 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.449 [ 00:16:29.449 { 00:16:29.449 "name": "BaseBdev4", 00:16:29.449 "aliases": [ 00:16:29.449 "cf975cbb-098e-41e0-864d-043d52c296ab" 00:16:29.449 ], 00:16:29.450 "product_name": "Malloc disk", 00:16:29.450 "block_size": 512, 00:16:29.450 "num_blocks": 65536, 00:16:29.450 "uuid": "cf975cbb-098e-41e0-864d-043d52c296ab", 00:16:29.450 "assigned_rate_limits": { 00:16:29.450 "rw_ios_per_sec": 0, 00:16:29.450 "rw_mbytes_per_sec": 0, 00:16:29.450 "r_mbytes_per_sec": 0, 00:16:29.450 "w_mbytes_per_sec": 0 00:16:29.450 }, 00:16:29.450 "claimed": true, 00:16:29.450 "claim_type": "exclusive_write", 00:16:29.450 "zoned": false, 00:16:29.450 "supported_io_types": { 00:16:29.450 "read": true, 00:16:29.450 "write": true, 00:16:29.450 "unmap": true, 00:16:29.450 "flush": true, 00:16:29.450 "reset": true, 00:16:29.450 "nvme_admin": false, 00:16:29.450 "nvme_io": false, 00:16:29.450 "nvme_io_md": false, 00:16:29.450 "write_zeroes": true, 00:16:29.450 "zcopy": true, 00:16:29.450 "get_zone_info": false, 00:16:29.450 "zone_management": false, 00:16:29.450 "zone_append": false, 00:16:29.450 "compare": false, 00:16:29.450 "compare_and_write": false, 00:16:29.450 "abort": true, 00:16:29.450 "seek_hole": false, 00:16:29.450 "seek_data": false, 00:16:29.450 "copy": true, 00:16:29.450 "nvme_iov_md": false 00:16:29.450 }, 00:16:29.450 "memory_domains": [ 00:16:29.450 { 00:16:29.450 "dma_device_id": "system", 00:16:29.450 "dma_device_type": 1 00:16:29.450 }, 00:16:29.450 { 00:16:29.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.450 "dma_device_type": 2 00:16:29.450 } 00:16:29.450 ], 00:16:29.450 "driver_specific": {} 00:16:29.450 } 00:16:29.450 ] 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.450 16:57:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.450 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:29.710 16:57:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.710 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.710 "name": "Existed_Raid", 00:16:29.710 "uuid": "315a7944-52c4-4ecb-b714-1d5f87be5ee3", 00:16:29.710 "strip_size_kb": 64, 00:16:29.710 "state": "online", 00:16:29.710 "raid_level": "raid5f", 00:16:29.710 "superblock": true, 00:16:29.710 "num_base_bdevs": 4, 00:16:29.710 "num_base_bdevs_discovered": 4, 00:16:29.710 "num_base_bdevs_operational": 4, 00:16:29.710 "base_bdevs_list": [ 00:16:29.710 { 00:16:29.710 "name": "BaseBdev1", 00:16:29.710 "uuid": "88b074af-290c-4bc8-b5f4-106df6abe9ab", 00:16:29.710 "is_configured": true, 00:16:29.710 "data_offset": 2048, 00:16:29.710 "data_size": 63488 00:16:29.710 }, 00:16:29.710 { 00:16:29.710 "name": "BaseBdev2", 00:16:29.710 "uuid": "509e60ac-10be-434c-b612-af549d91ab9c", 00:16:29.710 "is_configured": true, 00:16:29.710 "data_offset": 2048, 00:16:29.710 "data_size": 63488 00:16:29.710 }, 00:16:29.710 { 00:16:29.710 "name": "BaseBdev3", 00:16:29.710 "uuid": "4dbca618-18bc-4579-b582-64ce3ce416d4", 00:16:29.710 "is_configured": true, 00:16:29.710 "data_offset": 2048, 00:16:29.710 "data_size": 63488 00:16:29.710 }, 00:16:29.710 { 00:16:29.710 "name": "BaseBdev4", 00:16:29.710 "uuid": "cf975cbb-098e-41e0-864d-043d52c296ab", 00:16:29.710 "is_configured": true, 00:16:29.710 "data_offset": 2048, 00:16:29.710 "data_size": 63488 00:16:29.710 } 00:16:29.710 ] 00:16:29.710 }' 00:16:29.710 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.710 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:29.970 [2024-11-08 16:57:59.441424] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:29.970 "name": "Existed_Raid", 00:16:29.970 "aliases": [ 00:16:29.970 "315a7944-52c4-4ecb-b714-1d5f87be5ee3" 00:16:29.970 ], 00:16:29.970 "product_name": "Raid Volume", 00:16:29.970 "block_size": 512, 00:16:29.970 "num_blocks": 190464, 00:16:29.970 "uuid": "315a7944-52c4-4ecb-b714-1d5f87be5ee3", 00:16:29.970 "assigned_rate_limits": { 00:16:29.970 "rw_ios_per_sec": 0, 00:16:29.970 "rw_mbytes_per_sec": 0, 00:16:29.970 "r_mbytes_per_sec": 0, 00:16:29.970 "w_mbytes_per_sec": 0 00:16:29.970 }, 00:16:29.970 "claimed": false, 00:16:29.970 "zoned": false, 00:16:29.970 "supported_io_types": { 00:16:29.970 "read": true, 00:16:29.970 "write": true, 00:16:29.970 "unmap": false, 00:16:29.970 "flush": false, 
00:16:29.970 "reset": true, 00:16:29.970 "nvme_admin": false, 00:16:29.970 "nvme_io": false, 00:16:29.970 "nvme_io_md": false, 00:16:29.970 "write_zeroes": true, 00:16:29.970 "zcopy": false, 00:16:29.970 "get_zone_info": false, 00:16:29.970 "zone_management": false, 00:16:29.970 "zone_append": false, 00:16:29.970 "compare": false, 00:16:29.970 "compare_and_write": false, 00:16:29.970 "abort": false, 00:16:29.970 "seek_hole": false, 00:16:29.970 "seek_data": false, 00:16:29.970 "copy": false, 00:16:29.970 "nvme_iov_md": false 00:16:29.970 }, 00:16:29.970 "driver_specific": { 00:16:29.970 "raid": { 00:16:29.970 "uuid": "315a7944-52c4-4ecb-b714-1d5f87be5ee3", 00:16:29.970 "strip_size_kb": 64, 00:16:29.970 "state": "online", 00:16:29.970 "raid_level": "raid5f", 00:16:29.970 "superblock": true, 00:16:29.970 "num_base_bdevs": 4, 00:16:29.970 "num_base_bdevs_discovered": 4, 00:16:29.970 "num_base_bdevs_operational": 4, 00:16:29.970 "base_bdevs_list": [ 00:16:29.970 { 00:16:29.970 "name": "BaseBdev1", 00:16:29.970 "uuid": "88b074af-290c-4bc8-b5f4-106df6abe9ab", 00:16:29.970 "is_configured": true, 00:16:29.970 "data_offset": 2048, 00:16:29.970 "data_size": 63488 00:16:29.970 }, 00:16:29.970 { 00:16:29.970 "name": "BaseBdev2", 00:16:29.970 "uuid": "509e60ac-10be-434c-b612-af549d91ab9c", 00:16:29.970 "is_configured": true, 00:16:29.970 "data_offset": 2048, 00:16:29.970 "data_size": 63488 00:16:29.970 }, 00:16:29.970 { 00:16:29.970 "name": "BaseBdev3", 00:16:29.970 "uuid": "4dbca618-18bc-4579-b582-64ce3ce416d4", 00:16:29.970 "is_configured": true, 00:16:29.970 "data_offset": 2048, 00:16:29.970 "data_size": 63488 00:16:29.970 }, 00:16:29.970 { 00:16:29.970 "name": "BaseBdev4", 00:16:29.970 "uuid": "cf975cbb-098e-41e0-864d-043d52c296ab", 00:16:29.970 "is_configured": true, 00:16:29.970 "data_offset": 2048, 00:16:29.970 "data_size": 63488 00:16:29.970 } 00:16:29.970 ] 00:16:29.970 } 00:16:29.970 } 00:16:29.970 }' 00:16:29.970 16:57:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:30.230 BaseBdev2 00:16:30.230 BaseBdev3 00:16:30.230 BaseBdev4' 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:30.230 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.230 16:57:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.231 16:57:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.231 [2024-11-08 16:57:59.740737] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.231 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.491 "name": "Existed_Raid", 00:16:30.491 "uuid": "315a7944-52c4-4ecb-b714-1d5f87be5ee3", 00:16:30.491 "strip_size_kb": 64, 00:16:30.491 "state": "online", 00:16:30.491 "raid_level": "raid5f", 00:16:30.491 "superblock": true, 00:16:30.491 "num_base_bdevs": 4, 00:16:30.491 "num_base_bdevs_discovered": 3, 00:16:30.491 "num_base_bdevs_operational": 3, 00:16:30.491 "base_bdevs_list": [ 00:16:30.491 { 00:16:30.491 "name": 
null, 00:16:30.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.491 "is_configured": false, 00:16:30.491 "data_offset": 0, 00:16:30.491 "data_size": 63488 00:16:30.491 }, 00:16:30.491 { 00:16:30.491 "name": "BaseBdev2", 00:16:30.491 "uuid": "509e60ac-10be-434c-b612-af549d91ab9c", 00:16:30.491 "is_configured": true, 00:16:30.491 "data_offset": 2048, 00:16:30.491 "data_size": 63488 00:16:30.491 }, 00:16:30.491 { 00:16:30.491 "name": "BaseBdev3", 00:16:30.491 "uuid": "4dbca618-18bc-4579-b582-64ce3ce416d4", 00:16:30.491 "is_configured": true, 00:16:30.491 "data_offset": 2048, 00:16:30.491 "data_size": 63488 00:16:30.491 }, 00:16:30.491 { 00:16:30.491 "name": "BaseBdev4", 00:16:30.491 "uuid": "cf975cbb-098e-41e0-864d-043d52c296ab", 00:16:30.491 "is_configured": true, 00:16:30.491 "data_offset": 2048, 00:16:30.491 "data_size": 63488 00:16:30.491 } 00:16:30.491 ] 00:16:30.491 }' 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.491 16:57:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.750 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:30.750 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:30.750 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:30.750 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.750 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.750 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.750 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.750 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:30.750 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:30.750 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:30.750 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.751 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.010 [2024-11-08 16:58:00.279923] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:31.010 [2024-11-08 16:58:00.280192] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.010 [2024-11-08 16:58:00.291983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.010 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.010 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:31.010 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:31.010 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:31.010 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.010 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.010 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.010 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.010 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.011 [2024-11-08 16:58:00.351986] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.011 [2024-11-08 
16:58:00.419803] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:31.011 [2024-11-08 16:58:00.419947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.011 16:58:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.011 BaseBdev2 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.011 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.011 [ 00:16:31.011 { 00:16:31.011 "name": "BaseBdev2", 00:16:31.011 "aliases": [ 00:16:31.011 "702696b4-5b95-498a-b9d3-ad0ad3a3e985" 00:16:31.011 ], 00:16:31.011 "product_name": "Malloc disk", 00:16:31.011 "block_size": 512, 00:16:31.011 
"num_blocks": 65536, 00:16:31.011 "uuid": "702696b4-5b95-498a-b9d3-ad0ad3a3e985", 00:16:31.011 "assigned_rate_limits": { 00:16:31.011 "rw_ios_per_sec": 0, 00:16:31.011 "rw_mbytes_per_sec": 0, 00:16:31.011 "r_mbytes_per_sec": 0, 00:16:31.011 "w_mbytes_per_sec": 0 00:16:31.011 }, 00:16:31.011 "claimed": false, 00:16:31.011 "zoned": false, 00:16:31.011 "supported_io_types": { 00:16:31.011 "read": true, 00:16:31.011 "write": true, 00:16:31.011 "unmap": true, 00:16:31.011 "flush": true, 00:16:31.011 "reset": true, 00:16:31.011 "nvme_admin": false, 00:16:31.011 "nvme_io": false, 00:16:31.011 "nvme_io_md": false, 00:16:31.011 "write_zeroes": true, 00:16:31.011 "zcopy": true, 00:16:31.011 "get_zone_info": false, 00:16:31.011 "zone_management": false, 00:16:31.011 "zone_append": false, 00:16:31.011 "compare": false, 00:16:31.011 "compare_and_write": false, 00:16:31.011 "abort": true, 00:16:31.011 "seek_hole": false, 00:16:31.011 "seek_data": false, 00:16:31.011 "copy": true, 00:16:31.011 "nvme_iov_md": false 00:16:31.011 }, 00:16:31.011 "memory_domains": [ 00:16:31.011 { 00:16:31.011 "dma_device_id": "system", 00:16:31.011 "dma_device_type": 1 00:16:31.272 }, 00:16:31.272 { 00:16:31.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.272 "dma_device_type": 2 00:16:31.272 } 00:16:31.272 ], 00:16:31.272 "driver_specific": {} 00:16:31.272 } 00:16:31.272 ] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.272 16:58:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.272 BaseBdev3 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.272 [ 00:16:31.272 { 00:16:31.272 "name": "BaseBdev3", 00:16:31.272 "aliases": [ 00:16:31.272 
"ad507fa0-c6e1-4177-9cce-db56c8eb2174" 00:16:31.272 ], 00:16:31.272 "product_name": "Malloc disk", 00:16:31.272 "block_size": 512, 00:16:31.272 "num_blocks": 65536, 00:16:31.272 "uuid": "ad507fa0-c6e1-4177-9cce-db56c8eb2174", 00:16:31.272 "assigned_rate_limits": { 00:16:31.272 "rw_ios_per_sec": 0, 00:16:31.272 "rw_mbytes_per_sec": 0, 00:16:31.272 "r_mbytes_per_sec": 0, 00:16:31.272 "w_mbytes_per_sec": 0 00:16:31.272 }, 00:16:31.272 "claimed": false, 00:16:31.272 "zoned": false, 00:16:31.272 "supported_io_types": { 00:16:31.272 "read": true, 00:16:31.272 "write": true, 00:16:31.272 "unmap": true, 00:16:31.272 "flush": true, 00:16:31.272 "reset": true, 00:16:31.272 "nvme_admin": false, 00:16:31.272 "nvme_io": false, 00:16:31.272 "nvme_io_md": false, 00:16:31.272 "write_zeroes": true, 00:16:31.272 "zcopy": true, 00:16:31.272 "get_zone_info": false, 00:16:31.272 "zone_management": false, 00:16:31.272 "zone_append": false, 00:16:31.272 "compare": false, 00:16:31.272 "compare_and_write": false, 00:16:31.272 "abort": true, 00:16:31.272 "seek_hole": false, 00:16:31.272 "seek_data": false, 00:16:31.272 "copy": true, 00:16:31.272 "nvme_iov_md": false 00:16:31.272 }, 00:16:31.272 "memory_domains": [ 00:16:31.272 { 00:16:31.272 "dma_device_id": "system", 00:16:31.272 "dma_device_type": 1 00:16:31.272 }, 00:16:31.272 { 00:16:31.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.272 "dma_device_type": 2 00:16:31.272 } 00:16:31.272 ], 00:16:31.272 "driver_specific": {} 00:16:31.272 } 00:16:31.272 ] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.272 16:58:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.272 BaseBdev4 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:31.272 [ 00:16:31.272 { 00:16:31.272 "name": "BaseBdev4", 00:16:31.272 "aliases": [ 00:16:31.272 "e934ac08-d066-4ead-b04a-b17ab1380870" 00:16:31.272 ], 00:16:31.272 "product_name": "Malloc disk", 00:16:31.272 "block_size": 512, 00:16:31.272 "num_blocks": 65536, 00:16:31.272 "uuid": "e934ac08-d066-4ead-b04a-b17ab1380870", 00:16:31.272 "assigned_rate_limits": { 00:16:31.272 "rw_ios_per_sec": 0, 00:16:31.272 "rw_mbytes_per_sec": 0, 00:16:31.272 "r_mbytes_per_sec": 0, 00:16:31.272 "w_mbytes_per_sec": 0 00:16:31.272 }, 00:16:31.272 "claimed": false, 00:16:31.272 "zoned": false, 00:16:31.272 "supported_io_types": { 00:16:31.272 "read": true, 00:16:31.272 "write": true, 00:16:31.272 "unmap": true, 00:16:31.272 "flush": true, 00:16:31.272 "reset": true, 00:16:31.272 "nvme_admin": false, 00:16:31.272 "nvme_io": false, 00:16:31.272 "nvme_io_md": false, 00:16:31.272 "write_zeroes": true, 00:16:31.272 "zcopy": true, 00:16:31.272 "get_zone_info": false, 00:16:31.272 "zone_management": false, 00:16:31.272 "zone_append": false, 00:16:31.272 "compare": false, 00:16:31.272 "compare_and_write": false, 00:16:31.272 "abort": true, 00:16:31.272 "seek_hole": false, 00:16:31.272 "seek_data": false, 00:16:31.272 "copy": true, 00:16:31.272 "nvme_iov_md": false 00:16:31.272 }, 00:16:31.272 "memory_domains": [ 00:16:31.272 { 00:16:31.272 "dma_device_id": "system", 00:16:31.272 "dma_device_type": 1 00:16:31.272 }, 00:16:31.272 { 00:16:31.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.272 "dma_device_type": 2 00:16:31.272 } 00:16:31.272 ], 00:16:31.272 "driver_specific": {} 00:16:31.272 } 00:16:31.272 ] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:31.272 16:58:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.272 [2024-11-08 16:58:00.664877] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.272 [2024-11-08 16:58:00.665036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.272 [2024-11-08 16:58:00.665110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.272 [2024-11-08 16:58:00.667398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.272 [2024-11-08 16:58:00.667524] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.272 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.273 "name": "Existed_Raid", 00:16:31.273 "uuid": "7a2d39f5-8be5-4075-af90-da26578ecb22", 00:16:31.273 "strip_size_kb": 64, 00:16:31.273 "state": "configuring", 00:16:31.273 "raid_level": "raid5f", 00:16:31.273 "superblock": true, 00:16:31.273 "num_base_bdevs": 4, 00:16:31.273 "num_base_bdevs_discovered": 3, 00:16:31.273 "num_base_bdevs_operational": 4, 00:16:31.273 "base_bdevs_list": [ 00:16:31.273 { 00:16:31.273 "name": "BaseBdev1", 00:16:31.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.273 "is_configured": false, 00:16:31.273 "data_offset": 0, 00:16:31.273 "data_size": 0 00:16:31.273 }, 00:16:31.273 { 00:16:31.273 "name": "BaseBdev2", 00:16:31.273 "uuid": "702696b4-5b95-498a-b9d3-ad0ad3a3e985", 00:16:31.273 "is_configured": true, 00:16:31.273 "data_offset": 2048, 00:16:31.273 
"data_size": 63488 00:16:31.273 }, 00:16:31.273 { 00:16:31.273 "name": "BaseBdev3", 00:16:31.273 "uuid": "ad507fa0-c6e1-4177-9cce-db56c8eb2174", 00:16:31.273 "is_configured": true, 00:16:31.273 "data_offset": 2048, 00:16:31.273 "data_size": 63488 00:16:31.273 }, 00:16:31.273 { 00:16:31.273 "name": "BaseBdev4", 00:16:31.273 "uuid": "e934ac08-d066-4ead-b04a-b17ab1380870", 00:16:31.273 "is_configured": true, 00:16:31.273 "data_offset": 2048, 00:16:31.273 "data_size": 63488 00:16:31.273 } 00:16:31.273 ] 00:16:31.273 }' 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.273 16:58:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.842 [2024-11-08 16:58:01.100112] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.842 16:58:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.842 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.843 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.843 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.843 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.843 "name": "Existed_Raid", 00:16:31.843 "uuid": "7a2d39f5-8be5-4075-af90-da26578ecb22", 00:16:31.843 "strip_size_kb": 64, 00:16:31.843 "state": "configuring", 00:16:31.843 "raid_level": "raid5f", 00:16:31.843 "superblock": true, 00:16:31.843 "num_base_bdevs": 4, 00:16:31.843 "num_base_bdevs_discovered": 2, 00:16:31.843 "num_base_bdevs_operational": 4, 00:16:31.843 "base_bdevs_list": [ 00:16:31.843 { 00:16:31.843 "name": "BaseBdev1", 00:16:31.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.843 "is_configured": false, 00:16:31.843 "data_offset": 0, 00:16:31.843 "data_size": 0 00:16:31.843 }, 00:16:31.843 { 00:16:31.843 "name": null, 00:16:31.843 "uuid": "702696b4-5b95-498a-b9d3-ad0ad3a3e985", 00:16:31.843 
"is_configured": false, 00:16:31.843 "data_offset": 0, 00:16:31.843 "data_size": 63488 00:16:31.843 }, 00:16:31.843 { 00:16:31.843 "name": "BaseBdev3", 00:16:31.843 "uuid": "ad507fa0-c6e1-4177-9cce-db56c8eb2174", 00:16:31.843 "is_configured": true, 00:16:31.843 "data_offset": 2048, 00:16:31.843 "data_size": 63488 00:16:31.843 }, 00:16:31.843 { 00:16:31.843 "name": "BaseBdev4", 00:16:31.843 "uuid": "e934ac08-d066-4ead-b04a-b17ab1380870", 00:16:31.843 "is_configured": true, 00:16:31.843 "data_offset": 2048, 00:16:31.843 "data_size": 63488 00:16:31.843 } 00:16:31.843 ] 00:16:31.843 }' 00:16:31.843 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.843 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.102 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:32.102 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.102 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.102 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.102 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.102 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:32.102 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:32.102 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.102 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.360 [2024-11-08 16:58:01.638886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:32.360 BaseBdev1 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.360 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.360 [ 00:16:32.360 { 00:16:32.360 "name": "BaseBdev1", 00:16:32.360 "aliases": [ 00:16:32.360 "d5f13de3-d0ea-4bda-83dd-39d65649728b" 00:16:32.360 ], 00:16:32.360 "product_name": "Malloc disk", 00:16:32.360 "block_size": 512, 00:16:32.360 "num_blocks": 65536, 00:16:32.360 "uuid": "d5f13de3-d0ea-4bda-83dd-39d65649728b", 
00:16:32.360 "assigned_rate_limits": { 00:16:32.360 "rw_ios_per_sec": 0, 00:16:32.360 "rw_mbytes_per_sec": 0, 00:16:32.360 "r_mbytes_per_sec": 0, 00:16:32.360 "w_mbytes_per_sec": 0 00:16:32.360 }, 00:16:32.360 "claimed": true, 00:16:32.360 "claim_type": "exclusive_write", 00:16:32.360 "zoned": false, 00:16:32.360 "supported_io_types": { 00:16:32.360 "read": true, 00:16:32.360 "write": true, 00:16:32.360 "unmap": true, 00:16:32.360 "flush": true, 00:16:32.360 "reset": true, 00:16:32.360 "nvme_admin": false, 00:16:32.360 "nvme_io": false, 00:16:32.360 "nvme_io_md": false, 00:16:32.360 "write_zeroes": true, 00:16:32.360 "zcopy": true, 00:16:32.360 "get_zone_info": false, 00:16:32.360 "zone_management": false, 00:16:32.360 "zone_append": false, 00:16:32.360 "compare": false, 00:16:32.360 "compare_and_write": false, 00:16:32.360 "abort": true, 00:16:32.360 "seek_hole": false, 00:16:32.360 "seek_data": false, 00:16:32.360 "copy": true, 00:16:32.360 "nvme_iov_md": false 00:16:32.360 }, 00:16:32.360 "memory_domains": [ 00:16:32.360 { 00:16:32.360 "dma_device_id": "system", 00:16:32.360 "dma_device_type": 1 00:16:32.360 }, 00:16:32.360 { 00:16:32.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.360 "dma_device_type": 2 00:16:32.361 } 00:16:32.361 ], 00:16:32.361 "driver_specific": {} 00:16:32.361 } 00:16:32.361 ] 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.361 16:58:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.361 "name": "Existed_Raid", 00:16:32.361 "uuid": "7a2d39f5-8be5-4075-af90-da26578ecb22", 00:16:32.361 "strip_size_kb": 64, 00:16:32.361 "state": "configuring", 00:16:32.361 "raid_level": "raid5f", 00:16:32.361 "superblock": true, 00:16:32.361 "num_base_bdevs": 4, 00:16:32.361 "num_base_bdevs_discovered": 3, 00:16:32.361 "num_base_bdevs_operational": 4, 00:16:32.361 "base_bdevs_list": [ 00:16:32.361 { 00:16:32.361 "name": "BaseBdev1", 00:16:32.361 "uuid": "d5f13de3-d0ea-4bda-83dd-39d65649728b", 
00:16:32.361 "is_configured": true, 00:16:32.361 "data_offset": 2048, 00:16:32.361 "data_size": 63488 00:16:32.361 }, 00:16:32.361 { 00:16:32.361 "name": null, 00:16:32.361 "uuid": "702696b4-5b95-498a-b9d3-ad0ad3a3e985", 00:16:32.361 "is_configured": false, 00:16:32.361 "data_offset": 0, 00:16:32.361 "data_size": 63488 00:16:32.361 }, 00:16:32.361 { 00:16:32.361 "name": "BaseBdev3", 00:16:32.361 "uuid": "ad507fa0-c6e1-4177-9cce-db56c8eb2174", 00:16:32.361 "is_configured": true, 00:16:32.361 "data_offset": 2048, 00:16:32.361 "data_size": 63488 00:16:32.361 }, 00:16:32.361 { 00:16:32.361 "name": "BaseBdev4", 00:16:32.361 "uuid": "e934ac08-d066-4ead-b04a-b17ab1380870", 00:16:32.361 "is_configured": true, 00:16:32.361 "data_offset": 2048, 00:16:32.361 "data_size": 63488 00:16:32.361 } 00:16:32.361 ] 00:16:32.361 }' 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.361 16:58:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.621 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.621 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:32.621 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.621 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.621 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.882 [2024-11-08 16:58:02.162093] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.882 "name": "Existed_Raid", 00:16:32.882 "uuid": "7a2d39f5-8be5-4075-af90-da26578ecb22", 00:16:32.882 "strip_size_kb": 64, 00:16:32.882 "state": "configuring", 00:16:32.882 "raid_level": "raid5f", 00:16:32.882 "superblock": true, 00:16:32.882 "num_base_bdevs": 4, 00:16:32.882 "num_base_bdevs_discovered": 2, 00:16:32.882 "num_base_bdevs_operational": 4, 00:16:32.882 "base_bdevs_list": [ 00:16:32.882 { 00:16:32.882 "name": "BaseBdev1", 00:16:32.882 "uuid": "d5f13de3-d0ea-4bda-83dd-39d65649728b", 00:16:32.882 "is_configured": true, 00:16:32.882 "data_offset": 2048, 00:16:32.882 "data_size": 63488 00:16:32.882 }, 00:16:32.882 { 00:16:32.882 "name": null, 00:16:32.882 "uuid": "702696b4-5b95-498a-b9d3-ad0ad3a3e985", 00:16:32.882 "is_configured": false, 00:16:32.882 "data_offset": 0, 00:16:32.882 "data_size": 63488 00:16:32.882 }, 00:16:32.882 { 00:16:32.882 "name": null, 00:16:32.882 "uuid": "ad507fa0-c6e1-4177-9cce-db56c8eb2174", 00:16:32.882 "is_configured": false, 00:16:32.882 "data_offset": 0, 00:16:32.882 "data_size": 63488 00:16:32.882 }, 00:16:32.882 { 00:16:32.882 "name": "BaseBdev4", 00:16:32.882 "uuid": "e934ac08-d066-4ead-b04a-b17ab1380870", 00:16:32.882 "is_configured": true, 00:16:32.882 "data_offset": 2048, 00:16:32.882 "data_size": 63488 00:16:32.882 } 00:16:32.882 ] 00:16:32.882 }' 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.882 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.142 [2024-11-08 16:58:02.649348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.142 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.401 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.401 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.401 "name": "Existed_Raid", 00:16:33.401 "uuid": "7a2d39f5-8be5-4075-af90-da26578ecb22", 00:16:33.401 "strip_size_kb": 64, 00:16:33.401 "state": "configuring", 00:16:33.401 "raid_level": "raid5f", 00:16:33.401 "superblock": true, 00:16:33.401 "num_base_bdevs": 4, 00:16:33.401 "num_base_bdevs_discovered": 3, 00:16:33.401 "num_base_bdevs_operational": 4, 00:16:33.401 "base_bdevs_list": [ 00:16:33.401 { 00:16:33.401 "name": "BaseBdev1", 00:16:33.401 "uuid": "d5f13de3-d0ea-4bda-83dd-39d65649728b", 00:16:33.401 "is_configured": true, 00:16:33.401 "data_offset": 2048, 00:16:33.401 "data_size": 63488 00:16:33.401 }, 00:16:33.401 { 00:16:33.401 "name": null, 00:16:33.401 "uuid": "702696b4-5b95-498a-b9d3-ad0ad3a3e985", 00:16:33.401 "is_configured": false, 00:16:33.401 "data_offset": 0, 00:16:33.401 "data_size": 63488 00:16:33.401 }, 00:16:33.401 { 00:16:33.401 "name": "BaseBdev3", 00:16:33.401 "uuid": "ad507fa0-c6e1-4177-9cce-db56c8eb2174", 
00:16:33.401 "is_configured": true, 00:16:33.401 "data_offset": 2048, 00:16:33.401 "data_size": 63488 00:16:33.401 }, 00:16:33.401 { 00:16:33.401 "name": "BaseBdev4", 00:16:33.401 "uuid": "e934ac08-d066-4ead-b04a-b17ab1380870", 00:16:33.401 "is_configured": true, 00:16:33.401 "data_offset": 2048, 00:16:33.401 "data_size": 63488 00:16:33.401 } 00:16:33.401 ] 00:16:33.401 }' 00:16:33.401 16:58:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.402 16:58:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.660 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.660 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.660 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.660 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:33.660 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.920 [2024-11-08 16:58:03.212456] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.920 "name": "Existed_Raid", 00:16:33.920 "uuid": "7a2d39f5-8be5-4075-af90-da26578ecb22", 00:16:33.920 "strip_size_kb": 64, 00:16:33.920 "state": "configuring", 00:16:33.920 "raid_level": "raid5f", 
00:16:33.920 "superblock": true, 00:16:33.920 "num_base_bdevs": 4, 00:16:33.920 "num_base_bdevs_discovered": 2, 00:16:33.920 "num_base_bdevs_operational": 4, 00:16:33.920 "base_bdevs_list": [ 00:16:33.920 { 00:16:33.920 "name": null, 00:16:33.920 "uuid": "d5f13de3-d0ea-4bda-83dd-39d65649728b", 00:16:33.920 "is_configured": false, 00:16:33.920 "data_offset": 0, 00:16:33.920 "data_size": 63488 00:16:33.920 }, 00:16:33.920 { 00:16:33.920 "name": null, 00:16:33.920 "uuid": "702696b4-5b95-498a-b9d3-ad0ad3a3e985", 00:16:33.920 "is_configured": false, 00:16:33.920 "data_offset": 0, 00:16:33.920 "data_size": 63488 00:16:33.920 }, 00:16:33.920 { 00:16:33.920 "name": "BaseBdev3", 00:16:33.920 "uuid": "ad507fa0-c6e1-4177-9cce-db56c8eb2174", 00:16:33.920 "is_configured": true, 00:16:33.920 "data_offset": 2048, 00:16:33.920 "data_size": 63488 00:16:33.920 }, 00:16:33.920 { 00:16:33.920 "name": "BaseBdev4", 00:16:33.920 "uuid": "e934ac08-d066-4ead-b04a-b17ab1380870", 00:16:33.920 "is_configured": true, 00:16:33.920 "data_offset": 2048, 00:16:33.920 "data_size": 63488 00:16:33.920 } 00:16:33.920 ] 00:16:33.920 }' 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.920 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.179 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.179 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:34.179 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.179 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.179 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.179 16:58:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:34.179 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:34.179 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.179 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.179 [2024-11-08 16:58:03.702835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.438 "name": "Existed_Raid", 00:16:34.438 "uuid": "7a2d39f5-8be5-4075-af90-da26578ecb22", 00:16:34.438 "strip_size_kb": 64, 00:16:34.438 "state": "configuring", 00:16:34.438 "raid_level": "raid5f", 00:16:34.438 "superblock": true, 00:16:34.438 "num_base_bdevs": 4, 00:16:34.438 "num_base_bdevs_discovered": 3, 00:16:34.438 "num_base_bdevs_operational": 4, 00:16:34.438 "base_bdevs_list": [ 00:16:34.438 { 00:16:34.438 "name": null, 00:16:34.438 "uuid": "d5f13de3-d0ea-4bda-83dd-39d65649728b", 00:16:34.438 "is_configured": false, 00:16:34.438 "data_offset": 0, 00:16:34.438 "data_size": 63488 00:16:34.438 }, 00:16:34.438 { 00:16:34.438 "name": "BaseBdev2", 00:16:34.438 "uuid": "702696b4-5b95-498a-b9d3-ad0ad3a3e985", 00:16:34.438 "is_configured": true, 00:16:34.438 "data_offset": 2048, 00:16:34.438 "data_size": 63488 00:16:34.438 }, 00:16:34.438 { 00:16:34.438 "name": "BaseBdev3", 00:16:34.438 "uuid": "ad507fa0-c6e1-4177-9cce-db56c8eb2174", 00:16:34.438 "is_configured": true, 00:16:34.438 "data_offset": 2048, 00:16:34.438 "data_size": 63488 00:16:34.438 }, 00:16:34.438 { 00:16:34.438 "name": "BaseBdev4", 00:16:34.438 "uuid": "e934ac08-d066-4ead-b04a-b17ab1380870", 00:16:34.438 "is_configured": true, 00:16:34.438 "data_offset": 2048, 00:16:34.438 "data_size": 63488 00:16:34.438 } 00:16:34.438 ] 00:16:34.438 }' 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:34.438 16:58:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.697 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.697 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:34.697 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.697 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.697 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.697 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:34.697 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.697 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:34.697 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.697 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.697 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d5f13de3-d0ea-4bda-83dd-39d65649728b 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.957 [2024-11-08 16:58:04.245378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:34.957 [2024-11-08 16:58:04.245740] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:34.957 [2024-11-08 16:58:04.245797] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:34.957 NewBaseBdev 00:16:34.957 [2024-11-08 16:58:04.246126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:34.957 [2024-11-08 16:58:04.246666] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:34.957 [2024-11-08 16:58:04.246732] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:16:34.957 [2024-11-08 16:58:04.246860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.957 [ 00:16:34.957 { 00:16:34.957 "name": "NewBaseBdev", 00:16:34.957 "aliases": [ 00:16:34.957 "d5f13de3-d0ea-4bda-83dd-39d65649728b" 00:16:34.957 ], 00:16:34.957 "product_name": "Malloc disk", 00:16:34.957 "block_size": 512, 00:16:34.957 "num_blocks": 65536, 00:16:34.957 "uuid": "d5f13de3-d0ea-4bda-83dd-39d65649728b", 00:16:34.957 "assigned_rate_limits": { 00:16:34.957 "rw_ios_per_sec": 0, 00:16:34.957 "rw_mbytes_per_sec": 0, 00:16:34.957 "r_mbytes_per_sec": 0, 00:16:34.957 "w_mbytes_per_sec": 0 00:16:34.957 }, 00:16:34.957 "claimed": true, 00:16:34.957 "claim_type": "exclusive_write", 00:16:34.957 "zoned": false, 00:16:34.957 "supported_io_types": { 00:16:34.957 "read": true, 00:16:34.957 "write": true, 00:16:34.957 "unmap": true, 00:16:34.957 "flush": true, 00:16:34.957 "reset": true, 00:16:34.957 "nvme_admin": false, 00:16:34.957 "nvme_io": false, 00:16:34.957 "nvme_io_md": false, 00:16:34.957 "write_zeroes": true, 00:16:34.957 "zcopy": true, 00:16:34.957 "get_zone_info": false, 00:16:34.957 "zone_management": false, 00:16:34.957 "zone_append": false, 00:16:34.957 "compare": false, 00:16:34.957 "compare_and_write": false, 00:16:34.957 "abort": true, 00:16:34.957 "seek_hole": false, 00:16:34.957 "seek_data": false, 00:16:34.957 "copy": true, 00:16:34.957 "nvme_iov_md": false 00:16:34.957 }, 00:16:34.957 "memory_domains": [ 00:16:34.957 { 00:16:34.957 "dma_device_id": "system", 00:16:34.957 "dma_device_type": 1 00:16:34.957 }, 00:16:34.957 { 00:16:34.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.957 "dma_device_type": 2 00:16:34.957 } 
00:16:34.957 ], 00:16:34.957 "driver_specific": {} 00:16:34.957 } 00:16:34.957 ] 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.957 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.958 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.958 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.958 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.958 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.958 
16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.958 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.958 "name": "Existed_Raid", 00:16:34.958 "uuid": "7a2d39f5-8be5-4075-af90-da26578ecb22", 00:16:34.958 "strip_size_kb": 64, 00:16:34.958 "state": "online", 00:16:34.958 "raid_level": "raid5f", 00:16:34.958 "superblock": true, 00:16:34.958 "num_base_bdevs": 4, 00:16:34.958 "num_base_bdevs_discovered": 4, 00:16:34.958 "num_base_bdevs_operational": 4, 00:16:34.958 "base_bdevs_list": [ 00:16:34.958 { 00:16:34.958 "name": "NewBaseBdev", 00:16:34.958 "uuid": "d5f13de3-d0ea-4bda-83dd-39d65649728b", 00:16:34.958 "is_configured": true, 00:16:34.958 "data_offset": 2048, 00:16:34.958 "data_size": 63488 00:16:34.958 }, 00:16:34.958 { 00:16:34.958 "name": "BaseBdev2", 00:16:34.958 "uuid": "702696b4-5b95-498a-b9d3-ad0ad3a3e985", 00:16:34.958 "is_configured": true, 00:16:34.958 "data_offset": 2048, 00:16:34.958 "data_size": 63488 00:16:34.958 }, 00:16:34.958 { 00:16:34.958 "name": "BaseBdev3", 00:16:34.958 "uuid": "ad507fa0-c6e1-4177-9cce-db56c8eb2174", 00:16:34.958 "is_configured": true, 00:16:34.958 "data_offset": 2048, 00:16:34.958 "data_size": 63488 00:16:34.958 }, 00:16:34.958 { 00:16:34.958 "name": "BaseBdev4", 00:16:34.958 "uuid": "e934ac08-d066-4ead-b04a-b17ab1380870", 00:16:34.958 "is_configured": true, 00:16:34.958 "data_offset": 2048, 00:16:34.958 "data_size": 63488 00:16:34.958 } 00:16:34.958 ] 00:16:34.958 }' 00:16:34.958 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.958 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.527 [2024-11-08 16:58:04.768901] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:35.527 "name": "Existed_Raid", 00:16:35.527 "aliases": [ 00:16:35.527 "7a2d39f5-8be5-4075-af90-da26578ecb22" 00:16:35.527 ], 00:16:35.527 "product_name": "Raid Volume", 00:16:35.527 "block_size": 512, 00:16:35.527 "num_blocks": 190464, 00:16:35.527 "uuid": "7a2d39f5-8be5-4075-af90-da26578ecb22", 00:16:35.527 "assigned_rate_limits": { 00:16:35.527 "rw_ios_per_sec": 0, 00:16:35.527 "rw_mbytes_per_sec": 0, 00:16:35.527 "r_mbytes_per_sec": 0, 00:16:35.527 "w_mbytes_per_sec": 0 00:16:35.527 }, 00:16:35.527 "claimed": false, 00:16:35.527 "zoned": false, 00:16:35.527 "supported_io_types": { 00:16:35.527 "read": true, 00:16:35.527 "write": true, 00:16:35.527 "unmap": false, 00:16:35.527 "flush": false, 
00:16:35.527 "reset": true, 00:16:35.527 "nvme_admin": false, 00:16:35.527 "nvme_io": false, 00:16:35.527 "nvme_io_md": false, 00:16:35.527 "write_zeroes": true, 00:16:35.527 "zcopy": false, 00:16:35.527 "get_zone_info": false, 00:16:35.527 "zone_management": false, 00:16:35.527 "zone_append": false, 00:16:35.527 "compare": false, 00:16:35.527 "compare_and_write": false, 00:16:35.527 "abort": false, 00:16:35.527 "seek_hole": false, 00:16:35.527 "seek_data": false, 00:16:35.527 "copy": false, 00:16:35.527 "nvme_iov_md": false 00:16:35.527 }, 00:16:35.527 "driver_specific": { 00:16:35.527 "raid": { 00:16:35.527 "uuid": "7a2d39f5-8be5-4075-af90-da26578ecb22", 00:16:35.527 "strip_size_kb": 64, 00:16:35.527 "state": "online", 00:16:35.527 "raid_level": "raid5f", 00:16:35.527 "superblock": true, 00:16:35.527 "num_base_bdevs": 4, 00:16:35.527 "num_base_bdevs_discovered": 4, 00:16:35.527 "num_base_bdevs_operational": 4, 00:16:35.527 "base_bdevs_list": [ 00:16:35.527 { 00:16:35.527 "name": "NewBaseBdev", 00:16:35.527 "uuid": "d5f13de3-d0ea-4bda-83dd-39d65649728b", 00:16:35.527 "is_configured": true, 00:16:35.527 "data_offset": 2048, 00:16:35.527 "data_size": 63488 00:16:35.527 }, 00:16:35.527 { 00:16:35.527 "name": "BaseBdev2", 00:16:35.527 "uuid": "702696b4-5b95-498a-b9d3-ad0ad3a3e985", 00:16:35.527 "is_configured": true, 00:16:35.527 "data_offset": 2048, 00:16:35.527 "data_size": 63488 00:16:35.527 }, 00:16:35.527 { 00:16:35.527 "name": "BaseBdev3", 00:16:35.527 "uuid": "ad507fa0-c6e1-4177-9cce-db56c8eb2174", 00:16:35.527 "is_configured": true, 00:16:35.527 "data_offset": 2048, 00:16:35.527 "data_size": 63488 00:16:35.527 }, 00:16:35.527 { 00:16:35.527 "name": "BaseBdev4", 00:16:35.527 "uuid": "e934ac08-d066-4ead-b04a-b17ab1380870", 00:16:35.527 "is_configured": true, 00:16:35.527 "data_offset": 2048, 00:16:35.527 "data_size": 63488 00:16:35.527 } 00:16:35.527 ] 00:16:35.527 } 00:16:35.527 } 00:16:35.527 }' 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:35.527 BaseBdev2 00:16:35.527 BaseBdev3 00:16:35.527 BaseBdev4' 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.527 
16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.527 16:58:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.527 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.527 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.527 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.527 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:35.527 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.527 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.527 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.527 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.787 16:58:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.787 [2024-11-08 16:58:05.112038] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:35.787 [2024-11-08 16:58:05.112083] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.787 [2024-11-08 16:58:05.112193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.787 [2024-11-08 16:58:05.112518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.787 [2024-11-08 16:58:05.112533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 94001 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94001 ']' 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 94001 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94001 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94001' 00:16:35.787 killing process with pid 94001 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 94001 00:16:35.787 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 94001 00:16:35.787 [2024-11-08 16:58:05.148440] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:35.787 [2024-11-08 16:58:05.192031] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:36.048 ************************************ 00:16:36.048 END TEST raid5f_state_function_test_sb 00:16:36.048 ************************************ 00:16:36.048 16:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:36.048 00:16:36.048 real 0m10.104s 00:16:36.048 user 0m17.269s 00:16:36.048 sys 0m2.137s 00:16:36.048 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:36.048 16:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.048 16:58:05 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:36.048 16:58:05 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:36.048 16:58:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:36.048 16:58:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:36.048 ************************************ 00:16:36.048 START TEST raid5f_superblock_test 00:16:36.048 ************************************ 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94655 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94655 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94655 ']' 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.048 16:58:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.308 [2024-11-08 16:58:05.618753] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:36.308 [2024-11-08 16:58:05.619398] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94655 ] 00:16:36.308 [2024-11-08 16:58:05.795936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.567 [2024-11-08 16:58:05.847515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.567 [2024-11-08 16:58:05.890558] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.567 [2024-11-08 16:58:05.890708] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.136 malloc1 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.136 [2024-11-08 16:58:06.493795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:37.136 [2024-11-08 16:58:06.493896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.136 [2024-11-08 16:58:06.493921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:37.136 [2024-11-08 16:58:06.493938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.136 [2024-11-08 16:58:06.496211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.136 [2024-11-08 16:58:06.496254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:37.136 pt1 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.136 malloc2 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.136 [2024-11-08 16:58:06.532620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:37.136 [2024-11-08 16:58:06.532747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.136 [2024-11-08 16:58:06.532782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:37.136 [2024-11-08 16:58:06.532813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.136 [2024-11-08 16:58:06.534955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.136 [2024-11-08 16:58:06.535023] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:37.136 pt2 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.136 malloc3 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.136 [2024-11-08 16:58:06.565401] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:37.136 [2024-11-08 16:58:06.565500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.136 [2024-11-08 16:58:06.565551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:37.136 [2024-11-08 16:58:06.565581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.136 [2024-11-08 16:58:06.567709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.136 [2024-11-08 16:58:06.567784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:37.136 pt3 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.136 16:58:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.136 malloc4 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.136 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.137 [2024-11-08 16:58:06.597910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:37.137 [2024-11-08 16:58:06.597970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.137 [2024-11-08 16:58:06.597992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:37.137 [2024-11-08 16:58:06.598007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.137 [2024-11-08 16:58:06.600318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.137 [2024-11-08 16:58:06.600360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:37.137 pt4 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.137 [2024-11-08 16:58:06.610034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:37.137 [2024-11-08 16:58:06.612257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.137 [2024-11-08 16:58:06.612327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:37.137 [2024-11-08 16:58:06.612397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:37.137 [2024-11-08 16:58:06.612594] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:37.137 [2024-11-08 16:58:06.612610] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:37.137 [2024-11-08 16:58:06.612966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:37.137 [2024-11-08 16:58:06.613498] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:37.137 [2024-11-08 16:58:06.613569] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:37.137 [2024-11-08 16:58:06.613777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.137 
16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.137 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.400 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.400 "name": "raid_bdev1", 00:16:37.400 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:37.400 "strip_size_kb": 64, 00:16:37.400 "state": "online", 00:16:37.400 "raid_level": "raid5f", 00:16:37.400 "superblock": true, 00:16:37.400 "num_base_bdevs": 4, 00:16:37.400 "num_base_bdevs_discovered": 4, 00:16:37.400 "num_base_bdevs_operational": 4, 00:16:37.400 "base_bdevs_list": [ 00:16:37.400 { 00:16:37.400 "name": "pt1", 00:16:37.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.400 "is_configured": true, 00:16:37.400 "data_offset": 2048, 00:16:37.400 "data_size": 63488 00:16:37.400 }, 00:16:37.400 { 00:16:37.400 "name": "pt2", 00:16:37.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.400 "is_configured": true, 00:16:37.400 "data_offset": 2048, 00:16:37.400 
"data_size": 63488 00:16:37.400 }, 00:16:37.400 { 00:16:37.400 "name": "pt3", 00:16:37.400 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.400 "is_configured": true, 00:16:37.400 "data_offset": 2048, 00:16:37.400 "data_size": 63488 00:16:37.400 }, 00:16:37.400 { 00:16:37.400 "name": "pt4", 00:16:37.400 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.400 "is_configured": true, 00:16:37.400 "data_offset": 2048, 00:16:37.400 "data_size": 63488 00:16:37.400 } 00:16:37.400 ] 00:16:37.400 }' 00:16:37.400 16:58:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.400 16:58:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.666 [2024-11-08 16:58:07.097418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:37.666 "name": "raid_bdev1", 00:16:37.666 "aliases": [ 00:16:37.666 "712bdcb6-aa86-413b-8226-c697a1a27a26" 00:16:37.666 ], 00:16:37.666 "product_name": "Raid Volume", 00:16:37.666 "block_size": 512, 00:16:37.666 "num_blocks": 190464, 00:16:37.666 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:37.666 "assigned_rate_limits": { 00:16:37.666 "rw_ios_per_sec": 0, 00:16:37.666 "rw_mbytes_per_sec": 0, 00:16:37.666 "r_mbytes_per_sec": 0, 00:16:37.666 "w_mbytes_per_sec": 0 00:16:37.666 }, 00:16:37.666 "claimed": false, 00:16:37.666 "zoned": false, 00:16:37.666 "supported_io_types": { 00:16:37.666 "read": true, 00:16:37.666 "write": true, 00:16:37.666 "unmap": false, 00:16:37.666 "flush": false, 00:16:37.666 "reset": true, 00:16:37.666 "nvme_admin": false, 00:16:37.666 "nvme_io": false, 00:16:37.666 "nvme_io_md": false, 00:16:37.666 "write_zeroes": true, 00:16:37.666 "zcopy": false, 00:16:37.666 "get_zone_info": false, 00:16:37.666 "zone_management": false, 00:16:37.666 "zone_append": false, 00:16:37.666 "compare": false, 00:16:37.666 "compare_and_write": false, 00:16:37.666 "abort": false, 00:16:37.666 "seek_hole": false, 00:16:37.666 "seek_data": false, 00:16:37.666 "copy": false, 00:16:37.666 "nvme_iov_md": false 00:16:37.666 }, 00:16:37.666 "driver_specific": { 00:16:37.666 "raid": { 00:16:37.666 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:37.666 "strip_size_kb": 64, 00:16:37.666 "state": "online", 00:16:37.666 "raid_level": "raid5f", 00:16:37.666 "superblock": true, 00:16:37.666 "num_base_bdevs": 4, 00:16:37.666 "num_base_bdevs_discovered": 4, 00:16:37.666 "num_base_bdevs_operational": 4, 00:16:37.666 "base_bdevs_list": [ 00:16:37.666 { 00:16:37.666 "name": "pt1", 00:16:37.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.666 "is_configured": true, 00:16:37.666 "data_offset": 2048, 
00:16:37.666 "data_size": 63488 00:16:37.666 }, 00:16:37.666 { 00:16:37.666 "name": "pt2", 00:16:37.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.666 "is_configured": true, 00:16:37.666 "data_offset": 2048, 00:16:37.666 "data_size": 63488 00:16:37.666 }, 00:16:37.666 { 00:16:37.666 "name": "pt3", 00:16:37.666 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.666 "is_configured": true, 00:16:37.666 "data_offset": 2048, 00:16:37.666 "data_size": 63488 00:16:37.666 }, 00:16:37.666 { 00:16:37.666 "name": "pt4", 00:16:37.666 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.666 "is_configured": true, 00:16:37.666 "data_offset": 2048, 00:16:37.666 "data_size": 63488 00:16:37.666 } 00:16:37.666 ] 00:16:37.666 } 00:16:37.666 } 00:16:37.666 }' 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:37.666 pt2 00:16:37.666 pt3 00:16:37.666 pt4' 00:16:37.666 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.926 16:58:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:37.926 [2024-11-08 16:58:07.420910] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.926 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=712bdcb6-aa86-413b-8226-c697a1a27a26 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
712bdcb6-aa86-413b-8226-c697a1a27a26 ']' 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.185 [2024-11-08 16:58:07.456601] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.185 [2024-11-08 16:58:07.456736] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.185 [2024-11-08 16:58:07.456868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.185 [2024-11-08 16:58:07.456985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.185 [2024-11-08 16:58:07.456998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.185 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:38.186 
16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 16:58:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 [2024-11-08 16:58:07.636397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:38.186 [2024-11-08 16:58:07.638747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:38.186 [2024-11-08 16:58:07.638807] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:38.186 [2024-11-08 16:58:07.638841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:38.186 [2024-11-08 16:58:07.638896] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:38.186 [2024-11-08 16:58:07.638948] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:38.186 [2024-11-08 16:58:07.638971] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:38.186 [2024-11-08 16:58:07.638990] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:38.186 [2024-11-08 16:58:07.639006] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.186 [2024-11-08 16:58:07.639020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:38.186 request: 00:16:38.186 { 00:16:38.186 "name": "raid_bdev1", 00:16:38.186 "raid_level": "raid5f", 00:16:38.186 "base_bdevs": [ 00:16:38.186 "malloc1", 00:16:38.186 "malloc2", 00:16:38.186 "malloc3", 00:16:38.186 "malloc4" 00:16:38.186 ], 00:16:38.186 "strip_size_kb": 64, 00:16:38.186 "superblock": false, 00:16:38.186 "method": "bdev_raid_create", 00:16:38.186 "req_id": 1 00:16:38.186 } 00:16:38.186 Got JSON-RPC error response 
00:16:38.186 response: 00:16:38.186 { 00:16:38.186 "code": -17, 00:16:38.186 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:38.186 } 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 [2024-11-08 16:58:07.700200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.186 [2024-11-08 16:58:07.700282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:38.186 [2024-11-08 16:58:07.700311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:38.186 [2024-11-08 16:58:07.700323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.186 [2024-11-08 16:58:07.702928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.186 [2024-11-08 16:58:07.702980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.186 [2024-11-08 16:58:07.703083] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:38.186 [2024-11-08 16:58:07.703159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.186 pt1 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.186 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:38.446 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.446 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.446 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.446 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.446 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.446 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.446 "name": "raid_bdev1", 00:16:38.446 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:38.446 "strip_size_kb": 64, 00:16:38.446 "state": "configuring", 00:16:38.446 "raid_level": "raid5f", 00:16:38.446 "superblock": true, 00:16:38.446 "num_base_bdevs": 4, 00:16:38.446 "num_base_bdevs_discovered": 1, 00:16:38.446 "num_base_bdevs_operational": 4, 00:16:38.446 "base_bdevs_list": [ 00:16:38.446 { 00:16:38.446 "name": "pt1", 00:16:38.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.446 "is_configured": true, 00:16:38.446 "data_offset": 2048, 00:16:38.446 "data_size": 63488 00:16:38.446 }, 00:16:38.446 { 00:16:38.446 "name": null, 00:16:38.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.446 "is_configured": false, 00:16:38.446 "data_offset": 2048, 00:16:38.446 "data_size": 63488 00:16:38.446 }, 00:16:38.446 { 00:16:38.446 "name": null, 00:16:38.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.446 "is_configured": false, 00:16:38.446 "data_offset": 2048, 00:16:38.446 "data_size": 63488 00:16:38.446 }, 00:16:38.446 { 00:16:38.446 "name": null, 00:16:38.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:38.446 "is_configured": false, 00:16:38.446 "data_offset": 2048, 00:16:38.446 "data_size": 63488 00:16:38.446 } 00:16:38.446 ] 00:16:38.446 }' 
00:16:38.446 16:58:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.446 16:58:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.705 [2024-11-08 16:58:08.183392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:38.705 [2024-11-08 16:58:08.183536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.705 [2024-11-08 16:58:08.183580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:38.705 [2024-11-08 16:58:08.183613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.705 [2024-11-08 16:58:08.184100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.705 [2024-11-08 16:58:08.184165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:38.705 [2024-11-08 16:58:08.184288] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:38.705 [2024-11-08 16:58:08.184343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.705 pt2 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.705 [2024-11-08 16:58:08.195423] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.705 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:38.965 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.965 "name": "raid_bdev1", 00:16:38.965 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:38.965 "strip_size_kb": 64, 00:16:38.965 "state": "configuring", 00:16:38.965 "raid_level": "raid5f", 00:16:38.965 "superblock": true, 00:16:38.965 "num_base_bdevs": 4, 00:16:38.965 "num_base_bdevs_discovered": 1, 00:16:38.965 "num_base_bdevs_operational": 4, 00:16:38.965 "base_bdevs_list": [ 00:16:38.965 { 00:16:38.965 "name": "pt1", 00:16:38.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.965 "is_configured": true, 00:16:38.965 "data_offset": 2048, 00:16:38.965 "data_size": 63488 00:16:38.965 }, 00:16:38.965 { 00:16:38.965 "name": null, 00:16:38.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.965 "is_configured": false, 00:16:38.965 "data_offset": 0, 00:16:38.965 "data_size": 63488 00:16:38.965 }, 00:16:38.965 { 00:16:38.965 "name": null, 00:16:38.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.965 "is_configured": false, 00:16:38.965 "data_offset": 2048, 00:16:38.965 "data_size": 63488 00:16:38.965 }, 00:16:38.965 { 00:16:38.965 "name": null, 00:16:38.965 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:38.965 "is_configured": false, 00:16:38.965 "data_offset": 2048, 00:16:38.965 "data_size": 63488 00:16:38.965 } 00:16:38.965 ] 00:16:38.965 }' 00:16:38.965 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.965 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.224 [2024-11-08 16:58:08.650621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:39.224 [2024-11-08 16:58:08.650718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.224 [2024-11-08 16:58:08.650738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:39.224 [2024-11-08 16:58:08.650750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.224 [2024-11-08 16:58:08.651215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.224 [2024-11-08 16:58:08.651255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:39.224 [2024-11-08 16:58:08.651343] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:39.224 [2024-11-08 16:58:08.651372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:39.224 pt2 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.224 [2024-11-08 16:58:08.662554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:39.224 [2024-11-08 16:58:08.662648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.224 [2024-11-08 16:58:08.662686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:39.224 [2024-11-08 16:58:08.662697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.224 [2024-11-08 16:58:08.663120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.224 [2024-11-08 16:58:08.663147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:39.224 [2024-11-08 16:58:08.663226] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:39.224 [2024-11-08 16:58:08.663262] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:39.224 pt3 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.224 [2024-11-08 16:58:08.674537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:39.224 [2024-11-08 16:58:08.674622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.224 [2024-11-08 16:58:08.674643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:39.224 [2024-11-08 16:58:08.674669] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.224 [2024-11-08 16:58:08.675059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.224 [2024-11-08 16:58:08.675085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:39.224 [2024-11-08 16:58:08.675157] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:39.224 [2024-11-08 16:58:08.675198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:39.224 [2024-11-08 16:58:08.675334] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:39.224 [2024-11-08 16:58:08.675347] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:39.224 [2024-11-08 16:58:08.675604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:39.224 [2024-11-08 16:58:08.676144] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:39.224 [2024-11-08 16:58:08.676163] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:39.224 [2024-11-08 16:58:08.676279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.224 pt4 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.224 "name": "raid_bdev1", 00:16:39.224 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:39.224 "strip_size_kb": 64, 00:16:39.224 "state": "online", 00:16:39.224 "raid_level": "raid5f", 00:16:39.224 "superblock": true, 00:16:39.224 "num_base_bdevs": 4, 00:16:39.224 "num_base_bdevs_discovered": 4, 00:16:39.224 "num_base_bdevs_operational": 4, 00:16:39.224 "base_bdevs_list": [ 00:16:39.224 { 00:16:39.224 "name": "pt1", 00:16:39.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.224 "is_configured": true, 00:16:39.224 
"data_offset": 2048, 00:16:39.224 "data_size": 63488 00:16:39.224 }, 00:16:39.224 { 00:16:39.224 "name": "pt2", 00:16:39.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.224 "is_configured": true, 00:16:39.224 "data_offset": 2048, 00:16:39.224 "data_size": 63488 00:16:39.224 }, 00:16:39.224 { 00:16:39.224 "name": "pt3", 00:16:39.224 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.224 "is_configured": true, 00:16:39.224 "data_offset": 2048, 00:16:39.224 "data_size": 63488 00:16:39.224 }, 00:16:39.224 { 00:16:39.224 "name": "pt4", 00:16:39.224 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:39.224 "is_configured": true, 00:16:39.224 "data_offset": 2048, 00:16:39.224 "data_size": 63488 00:16:39.224 } 00:16:39.224 ] 00:16:39.224 }' 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.224 16:58:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.799 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:39.799 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:39.799 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:39.799 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:39.799 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:39.799 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:39.799 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:39.799 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.799 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.799 16:58:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.799 [2024-11-08 16:58:09.130035] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.799 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.799 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.799 "name": "raid_bdev1", 00:16:39.799 "aliases": [ 00:16:39.799 "712bdcb6-aa86-413b-8226-c697a1a27a26" 00:16:39.799 ], 00:16:39.799 "product_name": "Raid Volume", 00:16:39.799 "block_size": 512, 00:16:39.799 "num_blocks": 190464, 00:16:39.799 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:39.799 "assigned_rate_limits": { 00:16:39.799 "rw_ios_per_sec": 0, 00:16:39.799 "rw_mbytes_per_sec": 0, 00:16:39.799 "r_mbytes_per_sec": 0, 00:16:39.799 "w_mbytes_per_sec": 0 00:16:39.799 }, 00:16:39.799 "claimed": false, 00:16:39.799 "zoned": false, 00:16:39.799 "supported_io_types": { 00:16:39.799 "read": true, 00:16:39.799 "write": true, 00:16:39.799 "unmap": false, 00:16:39.799 "flush": false, 00:16:39.799 "reset": true, 00:16:39.799 "nvme_admin": false, 00:16:39.799 "nvme_io": false, 00:16:39.799 "nvme_io_md": false, 00:16:39.799 "write_zeroes": true, 00:16:39.799 "zcopy": false, 00:16:39.799 "get_zone_info": false, 00:16:39.799 "zone_management": false, 00:16:39.799 "zone_append": false, 00:16:39.799 "compare": false, 00:16:39.799 "compare_and_write": false, 00:16:39.799 "abort": false, 00:16:39.799 "seek_hole": false, 00:16:39.799 "seek_data": false, 00:16:39.799 "copy": false, 00:16:39.799 "nvme_iov_md": false 00:16:39.799 }, 00:16:39.799 "driver_specific": { 00:16:39.799 "raid": { 00:16:39.799 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:39.799 "strip_size_kb": 64, 00:16:39.799 "state": "online", 00:16:39.799 "raid_level": "raid5f", 00:16:39.799 "superblock": true, 00:16:39.799 "num_base_bdevs": 4, 00:16:39.799 "num_base_bdevs_discovered": 4, 
00:16:39.799 "num_base_bdevs_operational": 4, 00:16:39.799 "base_bdevs_list": [ 00:16:39.799 { 00:16:39.799 "name": "pt1", 00:16:39.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.800 "is_configured": true, 00:16:39.800 "data_offset": 2048, 00:16:39.800 "data_size": 63488 00:16:39.800 }, 00:16:39.800 { 00:16:39.800 "name": "pt2", 00:16:39.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.800 "is_configured": true, 00:16:39.800 "data_offset": 2048, 00:16:39.800 "data_size": 63488 00:16:39.800 }, 00:16:39.800 { 00:16:39.800 "name": "pt3", 00:16:39.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.800 "is_configured": true, 00:16:39.800 "data_offset": 2048, 00:16:39.800 "data_size": 63488 00:16:39.800 }, 00:16:39.800 { 00:16:39.800 "name": "pt4", 00:16:39.800 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:39.800 "is_configured": true, 00:16:39.800 "data_offset": 2048, 00:16:39.800 "data_size": 63488 00:16:39.800 } 00:16:39.800 ] 00:16:39.800 } 00:16:39.800 } 00:16:39.800 }' 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:39.800 pt2 00:16:39.800 pt3 00:16:39.800 pt4' 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.800 16:58:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.800 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.062 
16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:40.062 [2024-11-08 16:58:09.469465] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 712bdcb6-aa86-413b-8226-c697a1a27a26 '!=' 712bdcb6-aa86-413b-8226-c697a1a27a26 ']' 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.062 [2024-11-08 16:58:09.521205] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.062 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.062 "name": "raid_bdev1", 00:16:40.062 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:40.062 "strip_size_kb": 64, 00:16:40.062 "state": "online", 00:16:40.062 "raid_level": "raid5f", 00:16:40.062 "superblock": true, 00:16:40.062 "num_base_bdevs": 4, 00:16:40.062 "num_base_bdevs_discovered": 3, 00:16:40.062 "num_base_bdevs_operational": 3, 00:16:40.062 "base_bdevs_list": [ 00:16:40.062 { 00:16:40.062 "name": null, 00:16:40.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.062 "is_configured": false, 00:16:40.063 "data_offset": 0, 00:16:40.063 "data_size": 63488 00:16:40.063 }, 00:16:40.063 { 00:16:40.063 "name": "pt2", 00:16:40.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.063 "is_configured": true, 00:16:40.063 "data_offset": 2048, 00:16:40.063 "data_size": 63488 00:16:40.063 }, 00:16:40.063 { 00:16:40.063 "name": "pt3", 00:16:40.063 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.063 "is_configured": true, 00:16:40.063 "data_offset": 2048, 00:16:40.063 "data_size": 63488 00:16:40.063 }, 00:16:40.063 { 00:16:40.063 "name": "pt4", 00:16:40.063 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:40.063 "is_configured": true, 00:16:40.063 
"data_offset": 2048, 00:16:40.063 "data_size": 63488 00:16:40.063 } 00:16:40.063 ] 00:16:40.063 }' 00:16:40.063 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.063 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.635 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:40.635 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.635 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.635 [2024-11-08 16:58:09.948381] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.635 [2024-11-08 16:58:09.948497] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.635 [2024-11-08 16:58:09.948629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.635 [2024-11-08 16:58:09.948777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.635 [2024-11-08 16:58:09.948828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:40.635 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.635 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.635 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.635 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.635 16:58:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:40.635 16:58:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.635 [2024-11-08 16:58:10.048213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.635 [2024-11-08 16:58:10.048362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.635 [2024-11-08 16:58:10.048415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:40.635 [2024-11-08 16:58:10.048454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.635 [2024-11-08 16:58:10.051012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.635 [2024-11-08 16:58:10.051104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.635 [2024-11-08 16:58:10.051231] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:40.635 [2024-11-08 16:58:10.051317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.635 pt2 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.635 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.635 "name": "raid_bdev1", 00:16:40.635 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:40.635 "strip_size_kb": 64, 00:16:40.635 "state": "configuring", 00:16:40.635 "raid_level": "raid5f", 00:16:40.635 "superblock": true, 00:16:40.635 
"num_base_bdevs": 4, 00:16:40.635 "num_base_bdevs_discovered": 1, 00:16:40.635 "num_base_bdevs_operational": 3, 00:16:40.635 "base_bdevs_list": [ 00:16:40.635 { 00:16:40.635 "name": null, 00:16:40.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.635 "is_configured": false, 00:16:40.635 "data_offset": 2048, 00:16:40.635 "data_size": 63488 00:16:40.635 }, 00:16:40.635 { 00:16:40.635 "name": "pt2", 00:16:40.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.635 "is_configured": true, 00:16:40.635 "data_offset": 2048, 00:16:40.635 "data_size": 63488 00:16:40.635 }, 00:16:40.635 { 00:16:40.635 "name": null, 00:16:40.635 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.635 "is_configured": false, 00:16:40.635 "data_offset": 2048, 00:16:40.635 "data_size": 63488 00:16:40.635 }, 00:16:40.635 { 00:16:40.635 "name": null, 00:16:40.635 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:40.635 "is_configured": false, 00:16:40.635 "data_offset": 2048, 00:16:40.636 "data_size": 63488 00:16:40.636 } 00:16:40.636 ] 00:16:40.636 }' 00:16:40.636 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.636 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.216 [2024-11-08 16:58:10.519489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:41.216 [2024-11-08 
16:58:10.519683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.216 [2024-11-08 16:58:10.519713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:41.216 [2024-11-08 16:58:10.519730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.216 [2024-11-08 16:58:10.520225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.216 [2024-11-08 16:58:10.520251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:41.216 [2024-11-08 16:58:10.520343] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:41.216 [2024-11-08 16:58:10.520383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:41.216 pt3 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.216 "name": "raid_bdev1", 00:16:41.216 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:41.216 "strip_size_kb": 64, 00:16:41.216 "state": "configuring", 00:16:41.216 "raid_level": "raid5f", 00:16:41.216 "superblock": true, 00:16:41.216 "num_base_bdevs": 4, 00:16:41.216 "num_base_bdevs_discovered": 2, 00:16:41.216 "num_base_bdevs_operational": 3, 00:16:41.216 "base_bdevs_list": [ 00:16:41.216 { 00:16:41.216 "name": null, 00:16:41.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.216 "is_configured": false, 00:16:41.216 "data_offset": 2048, 00:16:41.216 "data_size": 63488 00:16:41.216 }, 00:16:41.216 { 00:16:41.216 "name": "pt2", 00:16:41.216 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.216 "is_configured": true, 00:16:41.216 "data_offset": 2048, 00:16:41.216 "data_size": 63488 00:16:41.216 }, 00:16:41.216 { 00:16:41.216 "name": "pt3", 00:16:41.216 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.216 "is_configured": true, 00:16:41.216 "data_offset": 2048, 00:16:41.216 "data_size": 63488 00:16:41.216 }, 00:16:41.216 { 00:16:41.216 "name": null, 00:16:41.216 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:41.216 "is_configured": false, 00:16:41.216 "data_offset": 2048, 
00:16:41.216 "data_size": 63488 00:16:41.216 } 00:16:41.216 ] 00:16:41.216 }' 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.216 16:58:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.784 [2024-11-08 16:58:11.023409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:41.784 [2024-11-08 16:58:11.023597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.784 [2024-11-08 16:58:11.023662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:41.784 [2024-11-08 16:58:11.023725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.784 [2024-11-08 16:58:11.024225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.784 [2024-11-08 16:58:11.024297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:41.784 [2024-11-08 16:58:11.024433] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:41.784 [2024-11-08 16:58:11.024498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:41.784 [2024-11-08 16:58:11.024659] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:41.784 [2024-11-08 16:58:11.024710] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:41.784 [2024-11-08 16:58:11.025022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:41.784 [2024-11-08 16:58:11.025710] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:41.784 [2024-11-08 16:58:11.025771] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:41.784 [2024-11-08 16:58:11.026094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.784 pt4 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.784 
16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.784 "name": "raid_bdev1", 00:16:41.784 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:41.784 "strip_size_kb": 64, 00:16:41.784 "state": "online", 00:16:41.784 "raid_level": "raid5f", 00:16:41.784 "superblock": true, 00:16:41.784 "num_base_bdevs": 4, 00:16:41.784 "num_base_bdevs_discovered": 3, 00:16:41.784 "num_base_bdevs_operational": 3, 00:16:41.784 "base_bdevs_list": [ 00:16:41.784 { 00:16:41.784 "name": null, 00:16:41.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.784 "is_configured": false, 00:16:41.784 "data_offset": 2048, 00:16:41.784 "data_size": 63488 00:16:41.784 }, 00:16:41.784 { 00:16:41.784 "name": "pt2", 00:16:41.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.784 "is_configured": true, 00:16:41.784 "data_offset": 2048, 00:16:41.784 "data_size": 63488 00:16:41.784 }, 00:16:41.784 { 00:16:41.784 "name": "pt3", 00:16:41.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.784 "is_configured": true, 00:16:41.784 "data_offset": 2048, 00:16:41.784 "data_size": 63488 00:16:41.784 }, 00:16:41.784 { 00:16:41.784 "name": "pt4", 00:16:41.784 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:41.784 "is_configured": true, 00:16:41.784 "data_offset": 2048, 00:16:41.784 "data_size": 63488 00:16:41.784 } 00:16:41.784 ] 00:16:41.784 }' 00:16:41.784 16:58:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.784 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.044 [2024-11-08 16:58:11.495498] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.044 [2024-11-08 16:58:11.495653] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.044 [2024-11-08 16:58:11.495790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.044 [2024-11-08 16:58:11.495928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.044 [2024-11-08 16:58:11.495992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.044 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.305 [2024-11-08 16:58:11.571500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:42.305 [2024-11-08 16:58:11.571600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.305 [2024-11-08 16:58:11.571649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:42.305 [2024-11-08 16:58:11.571663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.305 [2024-11-08 16:58:11.574639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.305 [2024-11-08 16:58:11.574714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:42.305 [2024-11-08 16:58:11.574832] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:42.305 [2024-11-08 16:58:11.574896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:42.305 
[2024-11-08 16:58:11.575042] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:42.305 [2024-11-08 16:58:11.575058] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.305 [2024-11-08 16:58:11.575091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:42.305 [2024-11-08 16:58:11.575150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:42.305 [2024-11-08 16:58:11.575340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:42.305 pt1 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.305 "name": "raid_bdev1", 00:16:42.305 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:42.305 "strip_size_kb": 64, 00:16:42.305 "state": "configuring", 00:16:42.305 "raid_level": "raid5f", 00:16:42.305 "superblock": true, 00:16:42.305 "num_base_bdevs": 4, 00:16:42.305 "num_base_bdevs_discovered": 2, 00:16:42.305 "num_base_bdevs_operational": 3, 00:16:42.305 "base_bdevs_list": [ 00:16:42.305 { 00:16:42.305 "name": null, 00:16:42.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.305 "is_configured": false, 00:16:42.305 "data_offset": 2048, 00:16:42.305 "data_size": 63488 00:16:42.305 }, 00:16:42.305 { 00:16:42.305 "name": "pt2", 00:16:42.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.305 "is_configured": true, 00:16:42.305 "data_offset": 2048, 00:16:42.305 "data_size": 63488 00:16:42.305 }, 00:16:42.305 { 00:16:42.305 "name": "pt3", 00:16:42.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.305 "is_configured": true, 00:16:42.305 "data_offset": 2048, 00:16:42.305 "data_size": 63488 00:16:42.305 }, 00:16:42.305 { 00:16:42.305 "name": null, 00:16:42.305 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:42.305 "is_configured": false, 00:16:42.305 "data_offset": 2048, 00:16:42.305 "data_size": 63488 00:16:42.305 } 00:16:42.305 ] 
00:16:42.305 }' 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.305 16:58:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.564 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:42.564 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:42.564 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.564 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.823 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:42.823 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:42.823 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.823 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 [2024-11-08 16:58:12.127440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:42.823 [2024-11-08 16:58:12.127622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.823 [2024-11-08 16:58:12.127692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:42.823 [2024-11-08 16:58:12.127756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.823 [2024-11-08 16:58:12.128325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.823 [2024-11-08 16:58:12.128405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:42.824 [2024-11-08 16:58:12.128540] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:42.824 [2024-11-08 16:58:12.128610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:42.824 [2024-11-08 16:58:12.128794] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:42.824 [2024-11-08 16:58:12.128855] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:42.824 [2024-11-08 16:58:12.129206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:42.824 [2024-11-08 16:58:12.129980] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:42.824 [2024-11-08 16:58:12.130049] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:42.824 [2024-11-08 16:58:12.130341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.824 pt4 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.824 16:58:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.824 "name": "raid_bdev1", 00:16:42.824 "uuid": "712bdcb6-aa86-413b-8226-c697a1a27a26", 00:16:42.824 "strip_size_kb": 64, 00:16:42.824 "state": "online", 00:16:42.824 "raid_level": "raid5f", 00:16:42.824 "superblock": true, 00:16:42.824 "num_base_bdevs": 4, 00:16:42.824 "num_base_bdevs_discovered": 3, 00:16:42.824 "num_base_bdevs_operational": 3, 00:16:42.824 "base_bdevs_list": [ 00:16:42.824 { 00:16:42.824 "name": null, 00:16:42.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.824 "is_configured": false, 00:16:42.824 "data_offset": 2048, 00:16:42.824 "data_size": 63488 00:16:42.824 }, 00:16:42.824 { 00:16:42.824 "name": "pt2", 00:16:42.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.824 "is_configured": true, 00:16:42.824 "data_offset": 2048, 00:16:42.824 "data_size": 63488 00:16:42.824 }, 00:16:42.824 { 00:16:42.824 "name": "pt3", 00:16:42.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.824 "is_configured": true, 00:16:42.824 "data_offset": 2048, 00:16:42.824 "data_size": 63488 
00:16:42.824 }, 00:16:42.824 { 00:16:42.824 "name": "pt4", 00:16:42.824 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:42.824 "is_configured": true, 00:16:42.824 "data_offset": 2048, 00:16:42.824 "data_size": 63488 00:16:42.824 } 00:16:42.824 ] 00:16:42.824 }' 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.824 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.084 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:43.084 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:43.084 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.084 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.344 [2024-11-08 16:58:12.663872] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 712bdcb6-aa86-413b-8226-c697a1a27a26 '!=' 712bdcb6-aa86-413b-8226-c697a1a27a26 ']' 00:16:43.344 16:58:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94655 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94655 ']' 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94655 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94655 00:16:43.344 killing process with pid 94655 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94655' 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94655 00:16:43.344 16:58:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94655 00:16:43.344 [2024-11-08 16:58:12.732342] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.344 [2024-11-08 16:58:12.732487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.344 [2024-11-08 16:58:12.732620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.344 [2024-11-08 16:58:12.732667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:43.344 [2024-11-08 16:58:12.780555] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.603 16:58:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:43.603 
00:16:43.603 real 0m7.505s 00:16:43.603 user 0m12.631s 00:16:43.603 sys 0m1.654s 00:16:43.603 ************************************ 00:16:43.603 END TEST raid5f_superblock_test 00:16:43.603 ************************************ 00:16:43.603 16:58:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.603 16:58:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.603 16:58:13 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:43.603 16:58:13 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:43.603 16:58:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:43.603 16:58:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.603 16:58:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.603 ************************************ 00:16:43.603 START TEST raid5f_rebuild_test 00:16:43.603 ************************************ 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:43.603 16:58:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95134 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95134 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95134 ']' 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.603 16:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.862 [2024-11-08 16:58:13.197474] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:43.862 [2024-11-08 16:58:13.197851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95134 ] 00:16:43.862 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:43.862 Zero copy mechanism will not be used. 00:16:43.862 [2024-11-08 16:58:13.379441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.122 [2024-11-08 16:58:13.437479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.122 [2024-11-08 16:58:13.483348] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.122 [2024-11-08 16:58:13.483496] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.705 BaseBdev1_malloc 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:16:44.705 [2024-11-08 16:58:14.149606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:44.705 [2024-11-08 16:58:14.149728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.705 [2024-11-08 16:58:14.149770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:44.705 [2024-11-08 16:58:14.149792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.705 [2024-11-08 16:58:14.152475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.705 [2024-11-08 16:58:14.152527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:44.705 BaseBdev1 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.705 BaseBdev2_malloc 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.705 [2024-11-08 16:58:14.179911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:44.705 [2024-11-08 16:58:14.180004] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.705 [2024-11-08 16:58:14.180039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:44.705 [2024-11-08 16:58:14.180053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.705 [2024-11-08 16:58:14.183364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.705 [2024-11-08 16:58:14.183514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:44.705 BaseBdev2 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.705 BaseBdev3_malloc 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.705 [2024-11-08 16:58:14.201454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:44.705 [2024-11-08 16:58:14.201587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.705 [2024-11-08 16:58:14.201678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:44.705 
[2024-11-08 16:58:14.201695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.705 [2024-11-08 16:58:14.204302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.705 [2024-11-08 16:58:14.204353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:44.705 BaseBdev3 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:44.705 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:44.706 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.706 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.983 BaseBdev4_malloc 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.983 [2024-11-08 16:58:14.223186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:44.983 [2024-11-08 16:58:14.223298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.983 [2024-11-08 16:58:14.223334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:44.983 [2024-11-08 16:58:14.223345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.983 [2024-11-08 16:58:14.225929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:44.983 [2024-11-08 16:58:14.225982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:44.983 BaseBdev4 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.983 spare_malloc 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.983 spare_delay 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.983 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.983 [2024-11-08 16:58:14.252755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:44.983 [2024-11-08 16:58:14.252922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.983 [2024-11-08 16:58:14.252960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:44.984 [2024-11-08 16:58:14.252971] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.984 [2024-11-08 16:58:14.255583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.984 [2024-11-08 16:58:14.255647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:44.984 spare 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.984 [2024-11-08 16:58:14.260843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.984 [2024-11-08 16:58:14.262992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.984 [2024-11-08 16:58:14.263074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.984 [2024-11-08 16:58:14.263122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:44.984 [2024-11-08 16:58:14.263231] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:44.984 [2024-11-08 16:58:14.263241] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:44.984 [2024-11-08 16:58:14.263623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:44.984 [2024-11-08 16:58:14.264190] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:44.984 [2024-11-08 16:58:14.264271] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:44.984 [2024-11-08 
16:58:14.264461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.984 "name": "raid_bdev1", 00:16:44.984 "uuid": 
"5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:44.984 "strip_size_kb": 64, 00:16:44.984 "state": "online", 00:16:44.984 "raid_level": "raid5f", 00:16:44.984 "superblock": false, 00:16:44.984 "num_base_bdevs": 4, 00:16:44.984 "num_base_bdevs_discovered": 4, 00:16:44.984 "num_base_bdevs_operational": 4, 00:16:44.984 "base_bdevs_list": [ 00:16:44.984 { 00:16:44.984 "name": "BaseBdev1", 00:16:44.984 "uuid": "13d79d9e-28d9-5780-a30a-08db28da3951", 00:16:44.984 "is_configured": true, 00:16:44.984 "data_offset": 0, 00:16:44.984 "data_size": 65536 00:16:44.984 }, 00:16:44.984 { 00:16:44.984 "name": "BaseBdev2", 00:16:44.984 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:44.984 "is_configured": true, 00:16:44.984 "data_offset": 0, 00:16:44.984 "data_size": 65536 00:16:44.984 }, 00:16:44.984 { 00:16:44.984 "name": "BaseBdev3", 00:16:44.984 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:44.984 "is_configured": true, 00:16:44.984 "data_offset": 0, 00:16:44.984 "data_size": 65536 00:16:44.984 }, 00:16:44.984 { 00:16:44.984 "name": "BaseBdev4", 00:16:44.984 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:44.984 "is_configured": true, 00:16:44.984 "data_offset": 0, 00:16:44.984 "data_size": 65536 00:16:44.984 } 00:16:44.984 ] 00:16:44.984 }' 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.984 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.553 [2024-11-08 16:58:14.785805] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.553 16:58:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:45.812 [2024-11-08 16:58:15.085121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:45.812 /dev/nbd0 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:45.812 1+0 records in 00:16:45.812 1+0 records out 00:16:45.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628 s, 6.5 MB/s 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.812 16:58:15 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:45.812 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:46.381 512+0 records in 00:16:46.381 512+0 records out 00:16:46.382 100663296 bytes (101 MB, 96 MiB) copied, 0.539751 s, 186 MB/s 00:16:46.382 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:46.382 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.382 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:46.382 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:46.382 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:46.382 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.382 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:16:46.641 [2024-11-08 16:58:15.967128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.641 [2024-11-08 16:58:15.989584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.641 16:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.641 16:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.641 16:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.641 "name": "raid_bdev1", 00:16:46.641 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:46.641 "strip_size_kb": 64, 00:16:46.641 "state": "online", 00:16:46.641 "raid_level": "raid5f", 00:16:46.641 "superblock": false, 00:16:46.641 "num_base_bdevs": 4, 00:16:46.641 "num_base_bdevs_discovered": 3, 00:16:46.641 "num_base_bdevs_operational": 3, 00:16:46.641 "base_bdevs_list": [ 00:16:46.641 { 00:16:46.641 "name": null, 00:16:46.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.641 "is_configured": false, 00:16:46.641 "data_offset": 0, 00:16:46.641 "data_size": 65536 00:16:46.641 }, 00:16:46.641 { 00:16:46.641 "name": "BaseBdev2", 00:16:46.641 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:46.641 "is_configured": true, 00:16:46.641 
"data_offset": 0, 00:16:46.641 "data_size": 65536 00:16:46.641 }, 00:16:46.641 { 00:16:46.641 "name": "BaseBdev3", 00:16:46.641 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:46.641 "is_configured": true, 00:16:46.641 "data_offset": 0, 00:16:46.641 "data_size": 65536 00:16:46.641 }, 00:16:46.641 { 00:16:46.641 "name": "BaseBdev4", 00:16:46.641 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:46.641 "is_configured": true, 00:16:46.641 "data_offset": 0, 00:16:46.641 "data_size": 65536 00:16:46.641 } 00:16:46.641 ] 00:16:46.641 }' 00:16:46.641 16:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.641 16:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.210 16:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:47.210 16:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.210 16:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.210 [2024-11-08 16:58:16.456877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.210 [2024-11-08 16:58:16.460742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:16:47.210 16:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.210 16:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:47.210 [2024-11-08 16:58:16.463469] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.149 "name": "raid_bdev1", 00:16:48.149 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:48.149 "strip_size_kb": 64, 00:16:48.149 "state": "online", 00:16:48.149 "raid_level": "raid5f", 00:16:48.149 "superblock": false, 00:16:48.149 "num_base_bdevs": 4, 00:16:48.149 "num_base_bdevs_discovered": 4, 00:16:48.149 "num_base_bdevs_operational": 4, 00:16:48.149 "process": { 00:16:48.149 "type": "rebuild", 00:16:48.149 "target": "spare", 00:16:48.149 "progress": { 00:16:48.149 "blocks": 19200, 00:16:48.149 "percent": 9 00:16:48.149 } 00:16:48.149 }, 00:16:48.149 "base_bdevs_list": [ 00:16:48.149 { 00:16:48.149 "name": "spare", 00:16:48.149 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:16:48.149 "is_configured": true, 00:16:48.149 "data_offset": 0, 00:16:48.149 "data_size": 65536 00:16:48.149 }, 00:16:48.149 { 00:16:48.149 "name": "BaseBdev2", 00:16:48.149 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:48.149 "is_configured": true, 00:16:48.149 "data_offset": 0, 00:16:48.149 "data_size": 65536 00:16:48.149 }, 00:16:48.149 { 00:16:48.149 "name": "BaseBdev3", 00:16:48.149 "uuid": 
"b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:48.149 "is_configured": true, 00:16:48.149 "data_offset": 0, 00:16:48.149 "data_size": 65536 00:16:48.149 }, 00:16:48.149 { 00:16:48.149 "name": "BaseBdev4", 00:16:48.149 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:48.149 "is_configured": true, 00:16:48.149 "data_offset": 0, 00:16:48.149 "data_size": 65536 00:16:48.149 } 00:16:48.149 ] 00:16:48.149 }' 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.149 16:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.149 [2024-11-08 16:58:17.616729] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.149 [2024-11-08 16:58:17.673593] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:48.149 [2024-11-08 16:58:17.673726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.149 [2024-11-08 16:58:17.673753] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.149 [2024-11-08 16:58:17.673763] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.465 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.465 "name": "raid_bdev1", 00:16:48.465 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:48.465 "strip_size_kb": 64, 00:16:48.465 "state": "online", 00:16:48.465 "raid_level": "raid5f", 00:16:48.465 "superblock": false, 00:16:48.465 "num_base_bdevs": 4, 00:16:48.465 "num_base_bdevs_discovered": 3, 00:16:48.465 
"num_base_bdevs_operational": 3, 00:16:48.465 "base_bdevs_list": [ 00:16:48.466 { 00:16:48.466 "name": null, 00:16:48.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.466 "is_configured": false, 00:16:48.466 "data_offset": 0, 00:16:48.466 "data_size": 65536 00:16:48.466 }, 00:16:48.466 { 00:16:48.466 "name": "BaseBdev2", 00:16:48.466 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:48.466 "is_configured": true, 00:16:48.466 "data_offset": 0, 00:16:48.466 "data_size": 65536 00:16:48.466 }, 00:16:48.466 { 00:16:48.466 "name": "BaseBdev3", 00:16:48.466 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:48.466 "is_configured": true, 00:16:48.466 "data_offset": 0, 00:16:48.466 "data_size": 65536 00:16:48.466 }, 00:16:48.466 { 00:16:48.466 "name": "BaseBdev4", 00:16:48.466 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:48.466 "is_configured": true, 00:16:48.466 "data_offset": 0, 00:16:48.466 "data_size": 65536 00:16:48.466 } 00:16:48.466 ] 00:16:48.466 }' 00:16:48.466 16:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.466 16:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.725 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.725 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.725 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.725 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.725 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.725 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.725 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.725 16:58:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.725 16:58:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.725 16:58:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.725 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.725 "name": "raid_bdev1", 00:16:48.725 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:48.725 "strip_size_kb": 64, 00:16:48.725 "state": "online", 00:16:48.725 "raid_level": "raid5f", 00:16:48.725 "superblock": false, 00:16:48.725 "num_base_bdevs": 4, 00:16:48.725 "num_base_bdevs_discovered": 3, 00:16:48.725 "num_base_bdevs_operational": 3, 00:16:48.725 "base_bdevs_list": [ 00:16:48.725 { 00:16:48.725 "name": null, 00:16:48.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.725 "is_configured": false, 00:16:48.725 "data_offset": 0, 00:16:48.725 "data_size": 65536 00:16:48.725 }, 00:16:48.725 { 00:16:48.725 "name": "BaseBdev2", 00:16:48.725 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:48.725 "is_configured": true, 00:16:48.725 "data_offset": 0, 00:16:48.725 "data_size": 65536 00:16:48.725 }, 00:16:48.725 { 00:16:48.725 "name": "BaseBdev3", 00:16:48.725 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:48.725 "is_configured": true, 00:16:48.725 "data_offset": 0, 00:16:48.725 "data_size": 65536 00:16:48.725 }, 00:16:48.725 { 00:16:48.725 "name": "BaseBdev4", 00:16:48.725 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:48.725 "is_configured": true, 00:16:48.725 "data_offset": 0, 00:16:48.725 "data_size": 65536 00:16:48.725 } 00:16:48.725 ] 00:16:48.725 }' 00:16:48.725 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.985 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.985 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:16:48.985 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.985 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:48.985 16:58:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.985 16:58:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.985 [2024-11-08 16:58:18.315522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.985 [2024-11-08 16:58:18.319374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:48.985 16:58:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.985 16:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:48.985 [2024-11-08 16:58:18.322172] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:49.923 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.923 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.923 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.923 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.923 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.923 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.923 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.923 16:58:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.923 16:58:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.923 16:58:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.923 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.923 "name": "raid_bdev1", 00:16:49.923 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:49.923 "strip_size_kb": 64, 00:16:49.923 "state": "online", 00:16:49.923 "raid_level": "raid5f", 00:16:49.923 "superblock": false, 00:16:49.923 "num_base_bdevs": 4, 00:16:49.923 "num_base_bdevs_discovered": 4, 00:16:49.923 "num_base_bdevs_operational": 4, 00:16:49.923 "process": { 00:16:49.924 "type": "rebuild", 00:16:49.924 "target": "spare", 00:16:49.924 "progress": { 00:16:49.924 "blocks": 19200, 00:16:49.924 "percent": 9 00:16:49.924 } 00:16:49.924 }, 00:16:49.924 "base_bdevs_list": [ 00:16:49.924 { 00:16:49.924 "name": "spare", 00:16:49.924 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:16:49.924 "is_configured": true, 00:16:49.924 "data_offset": 0, 00:16:49.924 "data_size": 65536 00:16:49.924 }, 00:16:49.924 { 00:16:49.924 "name": "BaseBdev2", 00:16:49.924 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:49.924 "is_configured": true, 00:16:49.924 "data_offset": 0, 00:16:49.924 "data_size": 65536 00:16:49.924 }, 00:16:49.924 { 00:16:49.924 "name": "BaseBdev3", 00:16:49.924 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:49.924 "is_configured": true, 00:16:49.924 "data_offset": 0, 00:16:49.924 "data_size": 65536 00:16:49.924 }, 00:16:49.924 { 00:16:49.924 "name": "BaseBdev4", 00:16:49.924 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:49.924 "is_configured": true, 00:16:49.924 "data_offset": 0, 00:16:49.924 "data_size": 65536 00:16:49.924 } 00:16:49.924 ] 00:16:49.924 }' 00:16:49.924 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.924 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:49.924 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=524 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.184 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.184 
"name": "raid_bdev1", 00:16:50.184 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:50.184 "strip_size_kb": 64, 00:16:50.184 "state": "online", 00:16:50.185 "raid_level": "raid5f", 00:16:50.185 "superblock": false, 00:16:50.185 "num_base_bdevs": 4, 00:16:50.185 "num_base_bdevs_discovered": 4, 00:16:50.185 "num_base_bdevs_operational": 4, 00:16:50.185 "process": { 00:16:50.185 "type": "rebuild", 00:16:50.185 "target": "spare", 00:16:50.185 "progress": { 00:16:50.185 "blocks": 21120, 00:16:50.185 "percent": 10 00:16:50.185 } 00:16:50.185 }, 00:16:50.185 "base_bdevs_list": [ 00:16:50.185 { 00:16:50.185 "name": "spare", 00:16:50.185 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:16:50.185 "is_configured": true, 00:16:50.185 "data_offset": 0, 00:16:50.185 "data_size": 65536 00:16:50.185 }, 00:16:50.185 { 00:16:50.185 "name": "BaseBdev2", 00:16:50.185 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:50.185 "is_configured": true, 00:16:50.185 "data_offset": 0, 00:16:50.185 "data_size": 65536 00:16:50.185 }, 00:16:50.185 { 00:16:50.185 "name": "BaseBdev3", 00:16:50.185 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:50.185 "is_configured": true, 00:16:50.185 "data_offset": 0, 00:16:50.185 "data_size": 65536 00:16:50.185 }, 00:16:50.185 { 00:16:50.185 "name": "BaseBdev4", 00:16:50.185 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:50.185 "is_configured": true, 00:16:50.185 "data_offset": 0, 00:16:50.185 "data_size": 65536 00:16:50.185 } 00:16:50.185 ] 00:16:50.185 }' 00:16:50.185 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.185 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.185 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.185 16:58:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.185 16:58:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.121 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.121 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.121 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.121 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.121 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.121 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.121 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.121 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.121 16:58:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.121 16:58:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.121 16:58:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.380 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.380 "name": "raid_bdev1", 00:16:51.380 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:51.380 "strip_size_kb": 64, 00:16:51.380 "state": "online", 00:16:51.380 "raid_level": "raid5f", 00:16:51.380 "superblock": false, 00:16:51.380 "num_base_bdevs": 4, 00:16:51.380 "num_base_bdevs_discovered": 4, 00:16:51.380 "num_base_bdevs_operational": 4, 00:16:51.380 "process": { 00:16:51.380 "type": "rebuild", 00:16:51.380 "target": "spare", 00:16:51.380 "progress": { 00:16:51.380 "blocks": 42240, 00:16:51.380 "percent": 21 00:16:51.380 } 00:16:51.380 }, 00:16:51.380 "base_bdevs_list": [ 00:16:51.380 { 
00:16:51.380 "name": "spare", 00:16:51.380 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:16:51.380 "is_configured": true, 00:16:51.380 "data_offset": 0, 00:16:51.380 "data_size": 65536 00:16:51.380 }, 00:16:51.380 { 00:16:51.380 "name": "BaseBdev2", 00:16:51.380 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:51.380 "is_configured": true, 00:16:51.380 "data_offset": 0, 00:16:51.380 "data_size": 65536 00:16:51.380 }, 00:16:51.380 { 00:16:51.380 "name": "BaseBdev3", 00:16:51.380 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:51.380 "is_configured": true, 00:16:51.380 "data_offset": 0, 00:16:51.380 "data_size": 65536 00:16:51.380 }, 00:16:51.380 { 00:16:51.380 "name": "BaseBdev4", 00:16:51.380 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:51.380 "is_configured": true, 00:16:51.380 "data_offset": 0, 00:16:51.380 "data_size": 65536 00:16:51.380 } 00:16:51.380 ] 00:16:51.380 }' 00:16:51.380 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.380 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.380 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.380 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.380 16:58:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.352 "name": "raid_bdev1", 00:16:52.352 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:52.352 "strip_size_kb": 64, 00:16:52.352 "state": "online", 00:16:52.352 "raid_level": "raid5f", 00:16:52.352 "superblock": false, 00:16:52.352 "num_base_bdevs": 4, 00:16:52.352 "num_base_bdevs_discovered": 4, 00:16:52.352 "num_base_bdevs_operational": 4, 00:16:52.352 "process": { 00:16:52.352 "type": "rebuild", 00:16:52.352 "target": "spare", 00:16:52.352 "progress": { 00:16:52.352 "blocks": 65280, 00:16:52.352 "percent": 33 00:16:52.352 } 00:16:52.352 }, 00:16:52.352 "base_bdevs_list": [ 00:16:52.352 { 00:16:52.352 "name": "spare", 00:16:52.352 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:16:52.352 "is_configured": true, 00:16:52.352 "data_offset": 0, 00:16:52.352 "data_size": 65536 00:16:52.352 }, 00:16:52.352 { 00:16:52.352 "name": "BaseBdev2", 00:16:52.352 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:52.352 "is_configured": true, 00:16:52.352 "data_offset": 0, 00:16:52.352 "data_size": 65536 00:16:52.352 }, 00:16:52.352 { 00:16:52.352 "name": "BaseBdev3", 00:16:52.352 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:52.352 "is_configured": true, 00:16:52.352 "data_offset": 0, 00:16:52.352 
"data_size": 65536 00:16:52.352 }, 00:16:52.352 { 00:16:52.352 "name": "BaseBdev4", 00:16:52.352 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:52.352 "is_configured": true, 00:16:52.352 "data_offset": 0, 00:16:52.352 "data_size": 65536 00:16:52.352 } 00:16:52.352 ] 00:16:52.352 }' 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.352 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.612 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.612 16:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.549 "name": "raid_bdev1", 00:16:53.549 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:53.549 "strip_size_kb": 64, 00:16:53.549 "state": "online", 00:16:53.549 "raid_level": "raid5f", 00:16:53.549 "superblock": false, 00:16:53.549 "num_base_bdevs": 4, 00:16:53.549 "num_base_bdevs_discovered": 4, 00:16:53.549 "num_base_bdevs_operational": 4, 00:16:53.549 "process": { 00:16:53.549 "type": "rebuild", 00:16:53.549 "target": "spare", 00:16:53.549 "progress": { 00:16:53.549 "blocks": 86400, 00:16:53.549 "percent": 43 00:16:53.549 } 00:16:53.549 }, 00:16:53.549 "base_bdevs_list": [ 00:16:53.549 { 00:16:53.549 "name": "spare", 00:16:53.549 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:16:53.549 "is_configured": true, 00:16:53.549 "data_offset": 0, 00:16:53.549 "data_size": 65536 00:16:53.549 }, 00:16:53.549 { 00:16:53.549 "name": "BaseBdev2", 00:16:53.549 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:53.549 "is_configured": true, 00:16:53.549 "data_offset": 0, 00:16:53.549 "data_size": 65536 00:16:53.549 }, 00:16:53.549 { 00:16:53.549 "name": "BaseBdev3", 00:16:53.549 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:53.549 "is_configured": true, 00:16:53.549 "data_offset": 0, 00:16:53.549 "data_size": 65536 00:16:53.549 }, 00:16:53.549 { 00:16:53.549 "name": "BaseBdev4", 00:16:53.549 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:53.549 "is_configured": true, 00:16:53.549 "data_offset": 0, 00:16:53.549 "data_size": 65536 00:16:53.549 } 00:16:53.549 ] 00:16:53.549 }' 00:16:53.549 16:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.549 16:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.549 16:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:53.549 16:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.549 16:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.929 "name": "raid_bdev1", 00:16:54.929 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:54.929 "strip_size_kb": 64, 00:16:54.929 "state": "online", 00:16:54.929 "raid_level": "raid5f", 00:16:54.929 "superblock": false, 00:16:54.929 "num_base_bdevs": 4, 00:16:54.929 "num_base_bdevs_discovered": 4, 00:16:54.929 "num_base_bdevs_operational": 4, 00:16:54.929 "process": { 00:16:54.929 "type": "rebuild", 00:16:54.929 "target": "spare", 00:16:54.929 
"progress": { 00:16:54.929 "blocks": 109440, 00:16:54.929 "percent": 55 00:16:54.929 } 00:16:54.929 }, 00:16:54.929 "base_bdevs_list": [ 00:16:54.929 { 00:16:54.929 "name": "spare", 00:16:54.929 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:16:54.929 "is_configured": true, 00:16:54.929 "data_offset": 0, 00:16:54.929 "data_size": 65536 00:16:54.929 }, 00:16:54.929 { 00:16:54.929 "name": "BaseBdev2", 00:16:54.929 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:54.929 "is_configured": true, 00:16:54.929 "data_offset": 0, 00:16:54.929 "data_size": 65536 00:16:54.929 }, 00:16:54.929 { 00:16:54.929 "name": "BaseBdev3", 00:16:54.929 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:54.929 "is_configured": true, 00:16:54.929 "data_offset": 0, 00:16:54.929 "data_size": 65536 00:16:54.929 }, 00:16:54.929 { 00:16:54.929 "name": "BaseBdev4", 00:16:54.929 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:54.929 "is_configured": true, 00:16:54.929 "data_offset": 0, 00:16:54.929 "data_size": 65536 00:16:54.929 } 00:16:54.929 ] 00:16:54.929 }' 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.929 16:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.867 16:58:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.867 "name": "raid_bdev1", 00:16:55.867 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:55.867 "strip_size_kb": 64, 00:16:55.867 "state": "online", 00:16:55.867 "raid_level": "raid5f", 00:16:55.867 "superblock": false, 00:16:55.867 "num_base_bdevs": 4, 00:16:55.867 "num_base_bdevs_discovered": 4, 00:16:55.867 "num_base_bdevs_operational": 4, 00:16:55.867 "process": { 00:16:55.867 "type": "rebuild", 00:16:55.867 "target": "spare", 00:16:55.867 "progress": { 00:16:55.867 "blocks": 130560, 00:16:55.867 "percent": 66 00:16:55.867 } 00:16:55.867 }, 00:16:55.867 "base_bdevs_list": [ 00:16:55.867 { 00:16:55.867 "name": "spare", 00:16:55.867 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:16:55.867 "is_configured": true, 00:16:55.867 "data_offset": 0, 00:16:55.867 "data_size": 65536 00:16:55.867 }, 00:16:55.867 { 00:16:55.867 "name": "BaseBdev2", 00:16:55.867 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:55.867 "is_configured": true, 00:16:55.867 "data_offset": 0, 00:16:55.867 "data_size": 65536 00:16:55.867 }, 00:16:55.867 { 
00:16:55.867 "name": "BaseBdev3", 00:16:55.867 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:55.867 "is_configured": true, 00:16:55.867 "data_offset": 0, 00:16:55.867 "data_size": 65536 00:16:55.867 }, 00:16:55.867 { 00:16:55.867 "name": "BaseBdev4", 00:16:55.867 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:55.867 "is_configured": true, 00:16:55.867 "data_offset": 0, 00:16:55.867 "data_size": 65536 00:16:55.867 } 00:16:55.867 ] 00:16:55.867 }' 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.867 16:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.251 
16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.251 "name": "raid_bdev1", 00:16:57.251 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:57.251 "strip_size_kb": 64, 00:16:57.251 "state": "online", 00:16:57.251 "raid_level": "raid5f", 00:16:57.251 "superblock": false, 00:16:57.251 "num_base_bdevs": 4, 00:16:57.251 "num_base_bdevs_discovered": 4, 00:16:57.251 "num_base_bdevs_operational": 4, 00:16:57.251 "process": { 00:16:57.251 "type": "rebuild", 00:16:57.251 "target": "spare", 00:16:57.251 "progress": { 00:16:57.251 "blocks": 151680, 00:16:57.251 "percent": 77 00:16:57.251 } 00:16:57.251 }, 00:16:57.251 "base_bdevs_list": [ 00:16:57.251 { 00:16:57.251 "name": "spare", 00:16:57.251 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:16:57.251 "is_configured": true, 00:16:57.251 "data_offset": 0, 00:16:57.251 "data_size": 65536 00:16:57.251 }, 00:16:57.251 { 00:16:57.251 "name": "BaseBdev2", 00:16:57.251 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:57.251 "is_configured": true, 00:16:57.251 "data_offset": 0, 00:16:57.251 "data_size": 65536 00:16:57.251 }, 00:16:57.251 { 00:16:57.251 "name": "BaseBdev3", 00:16:57.251 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:57.251 "is_configured": true, 00:16:57.251 "data_offset": 0, 00:16:57.251 "data_size": 65536 00:16:57.251 }, 00:16:57.251 { 00:16:57.251 "name": "BaseBdev4", 00:16:57.251 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:57.251 "is_configured": true, 00:16:57.251 "data_offset": 0, 00:16:57.251 "data_size": 65536 00:16:57.251 } 00:16:57.251 ] 00:16:57.251 }' 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.251 16:58:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.251 16:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.189 "name": "raid_bdev1", 00:16:58.189 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:58.189 "strip_size_kb": 64, 00:16:58.189 "state": "online", 00:16:58.189 "raid_level": "raid5f", 00:16:58.189 "superblock": false, 00:16:58.189 "num_base_bdevs": 4, 00:16:58.189 
"num_base_bdevs_discovered": 4, 00:16:58.189 "num_base_bdevs_operational": 4, 00:16:58.189 "process": { 00:16:58.189 "type": "rebuild", 00:16:58.189 "target": "spare", 00:16:58.189 "progress": { 00:16:58.189 "blocks": 174720, 00:16:58.189 "percent": 88 00:16:58.189 } 00:16:58.189 }, 00:16:58.189 "base_bdevs_list": [ 00:16:58.189 { 00:16:58.189 "name": "spare", 00:16:58.189 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:16:58.189 "is_configured": true, 00:16:58.189 "data_offset": 0, 00:16:58.189 "data_size": 65536 00:16:58.189 }, 00:16:58.189 { 00:16:58.189 "name": "BaseBdev2", 00:16:58.189 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:58.189 "is_configured": true, 00:16:58.189 "data_offset": 0, 00:16:58.189 "data_size": 65536 00:16:58.189 }, 00:16:58.189 { 00:16:58.189 "name": "BaseBdev3", 00:16:58.189 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:58.189 "is_configured": true, 00:16:58.189 "data_offset": 0, 00:16:58.189 "data_size": 65536 00:16:58.189 }, 00:16:58.189 { 00:16:58.189 "name": "BaseBdev4", 00:16:58.189 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:58.189 "is_configured": true, 00:16:58.189 "data_offset": 0, 00:16:58.189 "data_size": 65536 00:16:58.189 } 00:16:58.189 ] 00:16:58.189 }' 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.189 16:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.567 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.567 "name": "raid_bdev1", 00:16:59.567 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:16:59.568 "strip_size_kb": 64, 00:16:59.568 "state": "online", 00:16:59.568 "raid_level": "raid5f", 00:16:59.568 "superblock": false, 00:16:59.568 "num_base_bdevs": 4, 00:16:59.568 "num_base_bdevs_discovered": 4, 00:16:59.568 "num_base_bdevs_operational": 4, 00:16:59.568 "process": { 00:16:59.568 "type": "rebuild", 00:16:59.568 "target": "spare", 00:16:59.568 "progress": { 00:16:59.568 "blocks": 195840, 00:16:59.568 "percent": 99 00:16:59.568 } 00:16:59.568 }, 00:16:59.568 "base_bdevs_list": [ 00:16:59.568 { 00:16:59.568 "name": "spare", 00:16:59.568 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:16:59.568 "is_configured": true, 00:16:59.568 "data_offset": 0, 00:16:59.568 "data_size": 65536 00:16:59.568 }, 00:16:59.568 { 00:16:59.568 "name": "BaseBdev2", 00:16:59.568 "uuid": 
"8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:16:59.568 "is_configured": true, 00:16:59.568 "data_offset": 0, 00:16:59.568 "data_size": 65536 00:16:59.568 }, 00:16:59.568 { 00:16:59.568 "name": "BaseBdev3", 00:16:59.568 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:16:59.568 "is_configured": true, 00:16:59.568 "data_offset": 0, 00:16:59.568 "data_size": 65536 00:16:59.568 }, 00:16:59.568 { 00:16:59.568 "name": "BaseBdev4", 00:16:59.568 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:16:59.568 "is_configured": true, 00:16:59.568 "data_offset": 0, 00:16:59.568 "data_size": 65536 00:16:59.568 } 00:16:59.568 ] 00:16:59.568 }' 00:16:59.568 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.568 [2024-11-08 16:58:28.706621] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:59.568 [2024-11-08 16:58:28.706792] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:59.568 [2024-11-08 16:58:28.706877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.568 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.568 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.568 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.568 16:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.523 "name": "raid_bdev1", 00:17:00.523 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:17:00.523 "strip_size_kb": 64, 00:17:00.523 "state": "online", 00:17:00.523 "raid_level": "raid5f", 00:17:00.523 "superblock": false, 00:17:00.523 "num_base_bdevs": 4, 00:17:00.523 "num_base_bdevs_discovered": 4, 00:17:00.523 "num_base_bdevs_operational": 4, 00:17:00.523 "base_bdevs_list": [ 00:17:00.523 { 00:17:00.523 "name": "spare", 00:17:00.523 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:17:00.523 "is_configured": true, 00:17:00.523 "data_offset": 0, 00:17:00.523 "data_size": 65536 00:17:00.523 }, 00:17:00.523 { 00:17:00.523 "name": "BaseBdev2", 00:17:00.523 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:17:00.523 "is_configured": true, 00:17:00.523 "data_offset": 0, 00:17:00.523 "data_size": 65536 00:17:00.523 }, 00:17:00.523 { 00:17:00.523 "name": "BaseBdev3", 00:17:00.523 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:17:00.523 "is_configured": true, 00:17:00.523 "data_offset": 0, 00:17:00.523 "data_size": 65536 00:17:00.523 }, 00:17:00.523 { 00:17:00.523 "name": "BaseBdev4", 00:17:00.523 
"uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:17:00.523 "is_configured": true, 00:17:00.523 "data_offset": 0, 00:17:00.523 "data_size": 65536 00:17:00.523 } 00:17:00.523 ] 00:17:00.523 }' 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:00.523 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.524 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.524 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.524 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.524 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.524 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.524 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.524 16:58:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.524 16:58:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.524 16:58:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.524 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.524 "name": "raid_bdev1", 00:17:00.524 "uuid": 
"5e1ced3b-d15d-442b-924e-77b992a2668c", 00:17:00.524 "strip_size_kb": 64, 00:17:00.524 "state": "online", 00:17:00.524 "raid_level": "raid5f", 00:17:00.524 "superblock": false, 00:17:00.524 "num_base_bdevs": 4, 00:17:00.524 "num_base_bdevs_discovered": 4, 00:17:00.524 "num_base_bdevs_operational": 4, 00:17:00.524 "base_bdevs_list": [ 00:17:00.524 { 00:17:00.524 "name": "spare", 00:17:00.524 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:17:00.524 "is_configured": true, 00:17:00.524 "data_offset": 0, 00:17:00.524 "data_size": 65536 00:17:00.524 }, 00:17:00.524 { 00:17:00.524 "name": "BaseBdev2", 00:17:00.524 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:17:00.524 "is_configured": true, 00:17:00.524 "data_offset": 0, 00:17:00.524 "data_size": 65536 00:17:00.524 }, 00:17:00.524 { 00:17:00.524 "name": "BaseBdev3", 00:17:00.524 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:17:00.524 "is_configured": true, 00:17:00.524 "data_offset": 0, 00:17:00.524 "data_size": 65536 00:17:00.524 }, 00:17:00.524 { 00:17:00.524 "name": "BaseBdev4", 00:17:00.524 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:17:00.524 "is_configured": true, 00:17:00.524 "data_offset": 0, 00:17:00.524 "data_size": 65536 00:17:00.524 } 00:17:00.524 ] 00:17:00.524 }' 00:17:00.524 16:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.524 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:00.524 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.783 16:58:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.783 "name": "raid_bdev1", 00:17:00.783 "uuid": "5e1ced3b-d15d-442b-924e-77b992a2668c", 00:17:00.783 "strip_size_kb": 64, 00:17:00.783 "state": "online", 00:17:00.783 "raid_level": "raid5f", 00:17:00.783 "superblock": false, 00:17:00.783 "num_base_bdevs": 4, 00:17:00.783 "num_base_bdevs_discovered": 4, 00:17:00.783 "num_base_bdevs_operational": 4, 00:17:00.783 "base_bdevs_list": [ 00:17:00.783 { 00:17:00.783 "name": "spare", 00:17:00.783 "uuid": "145b0009-dc4f-5c55-8f3d-126fa9e48b01", 00:17:00.783 "is_configured": 
true, 00:17:00.783 "data_offset": 0, 00:17:00.783 "data_size": 65536 00:17:00.783 }, 00:17:00.783 { 00:17:00.783 "name": "BaseBdev2", 00:17:00.783 "uuid": "8a2158ed-9b24-5e61-b645-0b5ec2e835c6", 00:17:00.783 "is_configured": true, 00:17:00.783 "data_offset": 0, 00:17:00.783 "data_size": 65536 00:17:00.783 }, 00:17:00.783 { 00:17:00.783 "name": "BaseBdev3", 00:17:00.783 "uuid": "b0a338bc-4b56-5bfd-aefd-1754b3604eec", 00:17:00.783 "is_configured": true, 00:17:00.783 "data_offset": 0, 00:17:00.783 "data_size": 65536 00:17:00.783 }, 00:17:00.783 { 00:17:00.783 "name": "BaseBdev4", 00:17:00.783 "uuid": "425b1632-f2f6-54de-ae5d-3db7696feb54", 00:17:00.783 "is_configured": true, 00:17:00.783 "data_offset": 0, 00:17:00.783 "data_size": 65536 00:17:00.783 } 00:17:00.783 ] 00:17:00.783 }' 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.783 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.043 [2024-11-08 16:58:30.445779] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.043 [2024-11-08 16:58:30.445885] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.043 [2024-11-08 16:58:30.446018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.043 [2024-11-08 16:58:30.446144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.043 [2024-11-08 16:58:30.446210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:01.043 16:58:30 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.043 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:01.302 /dev/nbd0 00:17:01.302 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:01.302 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:01.302 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:01.302 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:01.302 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:01.302 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:01.302 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:01.302 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:01.302 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:01.302 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:01.303 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.303 1+0 records in 00:17:01.303 1+0 records out 00:17:01.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566709 s, 7.2 MB/s 00:17:01.303 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.303 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:01.303 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.303 16:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:01.303 16:58:30 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # return 0 00:17:01.303 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:01.303 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.303 16:58:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:01.562 /dev/nbd1 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.562 1+0 records in 00:17:01.562 1+0 records out 00:17:01.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400865 s, 10.2 MB/s 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # size=4096 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.562 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:01.822 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95134 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95134 ']' 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95134 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:17:02.081 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95134 00:17:02.340 killing process with pid 95134 00:17:02.340 Received shutdown signal, test time was about 60.000000 seconds 00:17:02.340 00:17:02.340 Latency(us) 00:17:02.340 [2024-11-08T16:58:31.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.340 [2024-11-08T16:58:31.868Z] =================================================================================================================== 00:17:02.340 [2024-11-08T16:58:31.868Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.340 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:02.340 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:02.340 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95134' 00:17:02.340 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95134 00:17:02.340 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 95134 00:17:02.340 [2024-11-08 16:58:31.618974] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.340 [2024-11-08 16:58:31.672621] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:02.600 00:17:02.600 real 0m18.826s 00:17:02.600 user 0m22.814s 00:17:02.600 sys 0m2.482s 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.600 ************************************ 00:17:02.600 END TEST raid5f_rebuild_test 00:17:02.600 ************************************ 00:17:02.600 16:58:31 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:02.600 16:58:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:02.600 16:58:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:02.600 16:58:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:02.600 ************************************ 00:17:02.600 START TEST raid5f_rebuild_test_sb 00:17:02.600 ************************************ 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95637 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95637 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95637 ']' 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.600 16:58:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.600 [2024-11-08 16:58:32.070029] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:02.600 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:02.600 Zero copy mechanism will not be used. 
00:17:02.600 [2024-11-08 16:58:32.070210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95637 ] 00:17:02.881 [2024-11-08 16:58:32.238222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.881 [2024-11-08 16:58:32.292451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.881 [2024-11-08 16:58:32.337219] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.881 [2024-11-08 16:58:32.337265] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.449 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.449 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:03.449 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.449 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:03.449 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.449 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 BaseBdev1_malloc 00:17:03.708 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.708 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:03.708 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.708 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 [2024-11-08 16:58:32.985507] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:03.708 [2024-11-08 16:58:32.985587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.708 [2024-11-08 16:58:32.985620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:03.708 [2024-11-08 16:58:32.985650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.708 [2024-11-08 16:58:32.988251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.708 [2024-11-08 16:58:32.988299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:03.708 BaseBdev1 00:17:03.708 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.708 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.708 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:03.708 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.708 16:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 BaseBdev2_malloc 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 [2024-11-08 16:58:33.025008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:03.708 [2024-11-08 16:58:33.025098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:03.708 [2024-11-08 16:58:33.025125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:03.708 [2024-11-08 16:58:33.025136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.708 [2024-11-08 16:58:33.027529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.708 [2024-11-08 16:58:33.027572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:03.708 BaseBdev2 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 BaseBdev3_malloc 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 [2024-11-08 16:58:33.054344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:03.708 [2024-11-08 16:58:33.054409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.708 [2024-11-08 16:58:33.054438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:03.708 [2024-11-08 
16:58:33.054447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.708 [2024-11-08 16:58:33.056785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.708 [2024-11-08 16:58:33.056821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:03.708 BaseBdev3 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 BaseBdev4_malloc 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 [2024-11-08 16:58:33.083697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:03.708 [2024-11-08 16:58:33.083776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.708 [2024-11-08 16:58:33.083810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:03.708 [2024-11-08 16:58:33.083820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.708 [2024-11-08 16:58:33.086385] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:03.708 [2024-11-08 16:58:33.086431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:03.708 BaseBdev4 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 spare_malloc 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 spare_delay 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.708 [2024-11-08 16:58:33.125213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:03.708 [2024-11-08 16:58:33.125306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.708 [2024-11-08 16:58:33.125339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:03.708 [2024-11-08 16:58:33.125349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.708 [2024-11-08 16:58:33.127984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.708 [2024-11-08 16:58:33.128039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:03.708 spare 00:17:03.708 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.709 [2024-11-08 16:58:33.137327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.709 [2024-11-08 16:58:33.139564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.709 [2024-11-08 16:58:33.139673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.709 [2024-11-08 16:58:33.139725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:03.709 [2024-11-08 16:58:33.139947] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:03.709 [2024-11-08 16:58:33.139972] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:03.709 [2024-11-08 16:58:33.140331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:03.709 [2024-11-08 16:58:33.140915] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:03.709 [2024-11-08 16:58:33.140941] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000006280 00:17:03.709 [2024-11-08 16:58:33.141198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.709 16:58:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.709 "name": "raid_bdev1", 00:17:03.709 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:03.709 "strip_size_kb": 64, 00:17:03.709 "state": "online", 00:17:03.709 "raid_level": "raid5f", 00:17:03.709 "superblock": true, 00:17:03.709 "num_base_bdevs": 4, 00:17:03.709 "num_base_bdevs_discovered": 4, 00:17:03.709 "num_base_bdevs_operational": 4, 00:17:03.709 "base_bdevs_list": [ 00:17:03.709 { 00:17:03.709 "name": "BaseBdev1", 00:17:03.709 "uuid": "9249e3d5-0204-5779-a0bd-7736ab4e3dcf", 00:17:03.709 "is_configured": true, 00:17:03.709 "data_offset": 2048, 00:17:03.709 "data_size": 63488 00:17:03.709 }, 00:17:03.709 { 00:17:03.709 "name": "BaseBdev2", 00:17:03.709 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:03.709 "is_configured": true, 00:17:03.709 "data_offset": 2048, 00:17:03.709 "data_size": 63488 00:17:03.709 }, 00:17:03.709 { 00:17:03.709 "name": "BaseBdev3", 00:17:03.709 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:03.709 "is_configured": true, 00:17:03.709 "data_offset": 2048, 00:17:03.709 "data_size": 63488 00:17:03.709 }, 00:17:03.709 { 00:17:03.709 "name": "BaseBdev4", 00:17:03.709 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:03.709 "is_configured": true, 00:17:03.709 "data_offset": 2048, 00:17:03.709 "data_size": 63488 00:17:03.709 } 00:17:03.709 ] 00:17:03.709 }' 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.709 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:04.275 [2024-11-08 16:58:33.636666] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:04.275 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:04.534 [2024-11-08 16:58:33.947953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:04.534 /dev/nbd0 00:17:04.534 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:04.534 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:04.534 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:04.534 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:04.534 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:04.534 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:04.534 16:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.534 1+0 records in 00:17:04.534 1+0 records out 00:17:04.534 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000272405 s, 15.0 MB/s 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:04.534 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:05.100 496+0 records in 00:17:05.100 496+0 records out 00:17:05.100 97517568 bytes (98 MB, 93 MiB) copied, 0.429653 s, 227 MB/s 00:17:05.100 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:05.100 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.100 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:05.100 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.100 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:17:05.100 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.100 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:05.358 [2024-11-08 16:58:34.711042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.358 [2024-11-08 16:58:34.731084] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.358 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.358 "name": "raid_bdev1", 00:17:05.358 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:05.358 "strip_size_kb": 64, 00:17:05.358 "state": "online", 00:17:05.358 "raid_level": "raid5f", 00:17:05.358 "superblock": true, 00:17:05.358 "num_base_bdevs": 4, 00:17:05.359 "num_base_bdevs_discovered": 3, 00:17:05.359 "num_base_bdevs_operational": 3, 00:17:05.359 "base_bdevs_list": [ 00:17:05.359 { 00:17:05.359 "name": null, 
00:17:05.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.359 "is_configured": false, 00:17:05.359 "data_offset": 0, 00:17:05.359 "data_size": 63488 00:17:05.359 }, 00:17:05.359 { 00:17:05.359 "name": "BaseBdev2", 00:17:05.359 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:05.359 "is_configured": true, 00:17:05.359 "data_offset": 2048, 00:17:05.359 "data_size": 63488 00:17:05.359 }, 00:17:05.359 { 00:17:05.359 "name": "BaseBdev3", 00:17:05.359 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:05.359 "is_configured": true, 00:17:05.359 "data_offset": 2048, 00:17:05.359 "data_size": 63488 00:17:05.359 }, 00:17:05.359 { 00:17:05.359 "name": "BaseBdev4", 00:17:05.359 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:05.359 "is_configured": true, 00:17:05.359 "data_offset": 2048, 00:17:05.359 "data_size": 63488 00:17:05.359 } 00:17:05.359 ] 00:17:05.359 }' 00:17:05.359 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.359 16:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.925 16:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.925 16:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.925 16:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.925 [2024-11-08 16:58:35.186429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.925 [2024-11-08 16:58:35.190188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:17:05.925 [2024-11-08 16:58:35.192905] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:05.925 16:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.925 16:58:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.933 "name": "raid_bdev1", 00:17:06.933 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:06.933 "strip_size_kb": 64, 00:17:06.933 "state": "online", 00:17:06.933 "raid_level": "raid5f", 00:17:06.933 "superblock": true, 00:17:06.933 "num_base_bdevs": 4, 00:17:06.933 "num_base_bdevs_discovered": 4, 00:17:06.933 "num_base_bdevs_operational": 4, 00:17:06.933 "process": { 00:17:06.933 "type": "rebuild", 00:17:06.933 "target": "spare", 00:17:06.933 "progress": { 00:17:06.933 "blocks": 19200, 00:17:06.933 "percent": 10 00:17:06.933 } 00:17:06.933 }, 00:17:06.933 "base_bdevs_list": [ 00:17:06.933 { 00:17:06.933 "name": "spare", 00:17:06.933 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:06.933 
"is_configured": true, 00:17:06.933 "data_offset": 2048, 00:17:06.933 "data_size": 63488 00:17:06.933 }, 00:17:06.933 { 00:17:06.933 "name": "BaseBdev2", 00:17:06.933 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:06.933 "is_configured": true, 00:17:06.933 "data_offset": 2048, 00:17:06.933 "data_size": 63488 00:17:06.933 }, 00:17:06.933 { 00:17:06.933 "name": "BaseBdev3", 00:17:06.933 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:06.933 "is_configured": true, 00:17:06.933 "data_offset": 2048, 00:17:06.933 "data_size": 63488 00:17:06.933 }, 00:17:06.933 { 00:17:06.933 "name": "BaseBdev4", 00:17:06.933 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:06.933 "is_configured": true, 00:17:06.933 "data_offset": 2048, 00:17:06.933 "data_size": 63488 00:17:06.933 } 00:17:06.933 ] 00:17:06.933 }' 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 [2024-11-08 16:58:36.352336] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.933 [2024-11-08 16:58:36.403109] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:06.933 [2024-11-08 16:58:36.403228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:06.933 [2024-11-08 16:58:36.403254] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.933 [2024-11-08 16:58:36.403274] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.933 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.934 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.934 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.934 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.934 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.934 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.934 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.934 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.934 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.192 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.192 "name": "raid_bdev1", 00:17:07.192 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:07.192 "strip_size_kb": 64, 00:17:07.192 "state": "online", 00:17:07.192 "raid_level": "raid5f", 00:17:07.192 "superblock": true, 00:17:07.192 "num_base_bdevs": 4, 00:17:07.192 "num_base_bdevs_discovered": 3, 00:17:07.192 "num_base_bdevs_operational": 3, 00:17:07.192 "base_bdevs_list": [ 00:17:07.192 { 00:17:07.192 "name": null, 00:17:07.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.192 "is_configured": false, 00:17:07.192 "data_offset": 0, 00:17:07.192 "data_size": 63488 00:17:07.192 }, 00:17:07.192 { 00:17:07.192 "name": "BaseBdev2", 00:17:07.192 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:07.192 "is_configured": true, 00:17:07.192 "data_offset": 2048, 00:17:07.192 "data_size": 63488 00:17:07.192 }, 00:17:07.192 { 00:17:07.192 "name": "BaseBdev3", 00:17:07.193 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:07.193 "is_configured": true, 00:17:07.193 "data_offset": 2048, 00:17:07.193 "data_size": 63488 00:17:07.193 }, 00:17:07.193 { 00:17:07.193 "name": "BaseBdev4", 00:17:07.193 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:07.193 "is_configured": true, 00:17:07.193 "data_offset": 2048, 00:17:07.193 "data_size": 63488 00:17:07.193 } 00:17:07.193 ] 00:17:07.193 }' 00:17:07.193 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.193 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.452 "name": "raid_bdev1", 00:17:07.452 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:07.452 "strip_size_kb": 64, 00:17:07.452 "state": "online", 00:17:07.452 "raid_level": "raid5f", 00:17:07.452 "superblock": true, 00:17:07.452 "num_base_bdevs": 4, 00:17:07.452 "num_base_bdevs_discovered": 3, 00:17:07.452 "num_base_bdevs_operational": 3, 00:17:07.452 "base_bdevs_list": [ 00:17:07.452 { 00:17:07.452 "name": null, 00:17:07.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.452 "is_configured": false, 00:17:07.452 "data_offset": 0, 00:17:07.452 "data_size": 63488 00:17:07.452 }, 00:17:07.452 { 00:17:07.452 "name": "BaseBdev2", 00:17:07.452 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:07.452 "is_configured": true, 00:17:07.452 "data_offset": 2048, 00:17:07.452 "data_size": 63488 00:17:07.452 }, 00:17:07.452 { 00:17:07.452 "name": "BaseBdev3", 00:17:07.452 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:07.452 "is_configured": true, 00:17:07.452 "data_offset": 2048, 00:17:07.452 "data_size": 63488 00:17:07.452 }, 
00:17:07.452 { 00:17:07.452 "name": "BaseBdev4", 00:17:07.452 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:07.452 "is_configured": true, 00:17:07.452 "data_offset": 2048, 00:17:07.452 "data_size": 63488 00:17:07.452 } 00:17:07.452 ] 00:17:07.452 }' 00:17:07.452 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.711 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.711 16:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.711 16:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.711 16:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:07.711 16:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.711 16:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.711 [2024-11-08 16:58:37.036625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.712 [2024-11-08 16:58:37.040259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:17:07.712 [2024-11-08 16:58:37.043047] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:07.712 16:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.712 16:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:08.646 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.646 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.646 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:08.646 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.646 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.646 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.646 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.646 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.646 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.646 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.646 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.646 "name": "raid_bdev1", 00:17:08.646 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:08.646 "strip_size_kb": 64, 00:17:08.646 "state": "online", 00:17:08.646 "raid_level": "raid5f", 00:17:08.646 "superblock": true, 00:17:08.646 "num_base_bdevs": 4, 00:17:08.646 "num_base_bdevs_discovered": 4, 00:17:08.646 "num_base_bdevs_operational": 4, 00:17:08.646 "process": { 00:17:08.646 "type": "rebuild", 00:17:08.646 "target": "spare", 00:17:08.646 "progress": { 00:17:08.646 "blocks": 19200, 00:17:08.646 "percent": 10 00:17:08.646 } 00:17:08.646 }, 00:17:08.646 "base_bdevs_list": [ 00:17:08.646 { 00:17:08.646 "name": "spare", 00:17:08.646 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:08.646 "is_configured": true, 00:17:08.646 "data_offset": 2048, 00:17:08.646 "data_size": 63488 00:17:08.646 }, 00:17:08.646 { 00:17:08.646 "name": "BaseBdev2", 00:17:08.646 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:08.646 "is_configured": true, 00:17:08.646 "data_offset": 2048, 00:17:08.646 "data_size": 63488 00:17:08.646 }, 00:17:08.646 { 00:17:08.646 "name": "BaseBdev3", 00:17:08.646 "uuid": 
"6f957778-0221-5445-b1bb-2f22b479e348", 00:17:08.646 "is_configured": true, 00:17:08.646 "data_offset": 2048, 00:17:08.646 "data_size": 63488 00:17:08.646 }, 00:17:08.646 { 00:17:08.646 "name": "BaseBdev4", 00:17:08.647 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:08.647 "is_configured": true, 00:17:08.647 "data_offset": 2048, 00:17:08.647 "data_size": 63488 00:17:08.647 } 00:17:08.647 ] 00:17:08.647 }' 00:17:08.647 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.647 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.647 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:08.906 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=543 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.906 "name": "raid_bdev1", 00:17:08.906 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:08.906 "strip_size_kb": 64, 00:17:08.906 "state": "online", 00:17:08.906 "raid_level": "raid5f", 00:17:08.906 "superblock": true, 00:17:08.906 "num_base_bdevs": 4, 00:17:08.906 "num_base_bdevs_discovered": 4, 00:17:08.906 "num_base_bdevs_operational": 4, 00:17:08.906 "process": { 00:17:08.906 "type": "rebuild", 00:17:08.906 "target": "spare", 00:17:08.906 "progress": { 00:17:08.906 "blocks": 21120, 00:17:08.906 "percent": 11 00:17:08.906 } 00:17:08.906 }, 00:17:08.906 "base_bdevs_list": [ 00:17:08.906 { 00:17:08.906 "name": "spare", 00:17:08.906 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:08.906 "is_configured": true, 00:17:08.906 "data_offset": 2048, 00:17:08.906 "data_size": 63488 00:17:08.906 }, 00:17:08.906 { 00:17:08.906 "name": "BaseBdev2", 00:17:08.906 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:08.906 "is_configured": true, 00:17:08.906 "data_offset": 2048, 00:17:08.906 "data_size": 63488 00:17:08.906 }, 00:17:08.906 { 00:17:08.906 "name": "BaseBdev3", 00:17:08.906 "uuid": 
"6f957778-0221-5445-b1bb-2f22b479e348", 00:17:08.906 "is_configured": true, 00:17:08.906 "data_offset": 2048, 00:17:08.906 "data_size": 63488 00:17:08.906 }, 00:17:08.906 { 00:17:08.906 "name": "BaseBdev4", 00:17:08.906 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:08.906 "is_configured": true, 00:17:08.906 "data_offset": 2048, 00:17:08.906 "data_size": 63488 00:17:08.906 } 00:17:08.906 ] 00:17:08.906 }' 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.906 16:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.842 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.842 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.842 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.842 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.842 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.842 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.842 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.842 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.842 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.842 
16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.842 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.101 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.101 "name": "raid_bdev1", 00:17:10.101 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:10.101 "strip_size_kb": 64, 00:17:10.101 "state": "online", 00:17:10.101 "raid_level": "raid5f", 00:17:10.101 "superblock": true, 00:17:10.101 "num_base_bdevs": 4, 00:17:10.101 "num_base_bdevs_discovered": 4, 00:17:10.101 "num_base_bdevs_operational": 4, 00:17:10.101 "process": { 00:17:10.101 "type": "rebuild", 00:17:10.101 "target": "spare", 00:17:10.101 "progress": { 00:17:10.101 "blocks": 42240, 00:17:10.101 "percent": 22 00:17:10.101 } 00:17:10.101 }, 00:17:10.101 "base_bdevs_list": [ 00:17:10.101 { 00:17:10.101 "name": "spare", 00:17:10.101 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:10.101 "is_configured": true, 00:17:10.101 "data_offset": 2048, 00:17:10.101 "data_size": 63488 00:17:10.101 }, 00:17:10.101 { 00:17:10.101 "name": "BaseBdev2", 00:17:10.101 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:10.101 "is_configured": true, 00:17:10.101 "data_offset": 2048, 00:17:10.101 "data_size": 63488 00:17:10.101 }, 00:17:10.101 { 00:17:10.101 "name": "BaseBdev3", 00:17:10.101 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:10.101 "is_configured": true, 00:17:10.101 "data_offset": 2048, 00:17:10.101 "data_size": 63488 00:17:10.101 }, 00:17:10.101 { 00:17:10.101 "name": "BaseBdev4", 00:17:10.101 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:10.101 "is_configured": true, 00:17:10.101 "data_offset": 2048, 00:17:10.101 "data_size": 63488 00:17:10.101 } 00:17:10.101 ] 00:17:10.101 }' 00:17:10.101 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.101 16:58:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.101 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.101 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.101 16:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.039 "name": "raid_bdev1", 00:17:11.039 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:11.039 "strip_size_kb": 64, 00:17:11.039 "state": "online", 00:17:11.039 "raid_level": "raid5f", 00:17:11.039 "superblock": true, 
00:17:11.039 "num_base_bdevs": 4, 00:17:11.039 "num_base_bdevs_discovered": 4, 00:17:11.039 "num_base_bdevs_operational": 4, 00:17:11.039 "process": { 00:17:11.039 "type": "rebuild", 00:17:11.039 "target": "spare", 00:17:11.039 "progress": { 00:17:11.039 "blocks": 65280, 00:17:11.039 "percent": 34 00:17:11.039 } 00:17:11.039 }, 00:17:11.039 "base_bdevs_list": [ 00:17:11.039 { 00:17:11.039 "name": "spare", 00:17:11.039 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:11.039 "is_configured": true, 00:17:11.039 "data_offset": 2048, 00:17:11.039 "data_size": 63488 00:17:11.039 }, 00:17:11.039 { 00:17:11.039 "name": "BaseBdev2", 00:17:11.039 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:11.039 "is_configured": true, 00:17:11.039 "data_offset": 2048, 00:17:11.039 "data_size": 63488 00:17:11.039 }, 00:17:11.039 { 00:17:11.039 "name": "BaseBdev3", 00:17:11.039 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:11.039 "is_configured": true, 00:17:11.039 "data_offset": 2048, 00:17:11.039 "data_size": 63488 00:17:11.039 }, 00:17:11.039 { 00:17:11.039 "name": "BaseBdev4", 00:17:11.039 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:11.039 "is_configured": true, 00:17:11.039 "data_offset": 2048, 00:17:11.039 "data_size": 63488 00:17:11.039 } 00:17:11.039 ] 00:17:11.039 }' 00:17:11.039 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.298 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.298 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.298 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.298 16:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.237 16:58:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.237 "name": "raid_bdev1", 00:17:12.237 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:12.237 "strip_size_kb": 64, 00:17:12.237 "state": "online", 00:17:12.237 "raid_level": "raid5f", 00:17:12.237 "superblock": true, 00:17:12.237 "num_base_bdevs": 4, 00:17:12.237 "num_base_bdevs_discovered": 4, 00:17:12.237 "num_base_bdevs_operational": 4, 00:17:12.237 "process": { 00:17:12.237 "type": "rebuild", 00:17:12.237 "target": "spare", 00:17:12.237 "progress": { 00:17:12.237 "blocks": 86400, 00:17:12.237 "percent": 45 00:17:12.237 } 00:17:12.237 }, 00:17:12.237 "base_bdevs_list": [ 00:17:12.237 { 00:17:12.237 "name": "spare", 00:17:12.237 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:12.237 "is_configured": true, 00:17:12.237 "data_offset": 2048, 00:17:12.237 
"data_size": 63488 00:17:12.237 }, 00:17:12.237 { 00:17:12.237 "name": "BaseBdev2", 00:17:12.237 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:12.237 "is_configured": true, 00:17:12.237 "data_offset": 2048, 00:17:12.237 "data_size": 63488 00:17:12.237 }, 00:17:12.237 { 00:17:12.237 "name": "BaseBdev3", 00:17:12.237 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:12.237 "is_configured": true, 00:17:12.237 "data_offset": 2048, 00:17:12.237 "data_size": 63488 00:17:12.237 }, 00:17:12.237 { 00:17:12.237 "name": "BaseBdev4", 00:17:12.237 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:12.237 "is_configured": true, 00:17:12.237 "data_offset": 2048, 00:17:12.237 "data_size": 63488 00:17:12.237 } 00:17:12.237 ] 00:17:12.237 }' 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.237 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.496 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.496 16:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.459 "name": "raid_bdev1", 00:17:13.459 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:13.459 "strip_size_kb": 64, 00:17:13.459 "state": "online", 00:17:13.459 "raid_level": "raid5f", 00:17:13.459 "superblock": true, 00:17:13.459 "num_base_bdevs": 4, 00:17:13.459 "num_base_bdevs_discovered": 4, 00:17:13.459 "num_base_bdevs_operational": 4, 00:17:13.459 "process": { 00:17:13.459 "type": "rebuild", 00:17:13.459 "target": "spare", 00:17:13.459 "progress": { 00:17:13.459 "blocks": 109440, 00:17:13.459 "percent": 57 00:17:13.459 } 00:17:13.459 }, 00:17:13.459 "base_bdevs_list": [ 00:17:13.459 { 00:17:13.459 "name": "spare", 00:17:13.459 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:13.459 "is_configured": true, 00:17:13.459 "data_offset": 2048, 00:17:13.459 "data_size": 63488 00:17:13.459 }, 00:17:13.459 { 00:17:13.459 "name": "BaseBdev2", 00:17:13.459 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:13.459 "is_configured": true, 00:17:13.459 "data_offset": 2048, 00:17:13.459 "data_size": 63488 00:17:13.459 }, 00:17:13.459 { 00:17:13.459 "name": "BaseBdev3", 00:17:13.459 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:13.459 "is_configured": true, 00:17:13.459 "data_offset": 2048, 00:17:13.459 "data_size": 63488 00:17:13.460 }, 00:17:13.460 { 00:17:13.460 "name": "BaseBdev4", 
00:17:13.460 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:13.460 "is_configured": true, 00:17:13.460 "data_offset": 2048, 00:17:13.460 "data_size": 63488 00:17:13.460 } 00:17:13.460 ] 00:17:13.460 }' 00:17:13.460 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.460 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.460 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.460 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.460 16:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.840 16:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.840 16:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.840 16:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.840 16:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.840 16:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.840 16:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.840 16:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.840 16:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.840 16:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.840 16:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.840 16:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:14.840 16:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.840 "name": "raid_bdev1", 00:17:14.840 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:14.840 "strip_size_kb": 64, 00:17:14.840 "state": "online", 00:17:14.840 "raid_level": "raid5f", 00:17:14.840 "superblock": true, 00:17:14.840 "num_base_bdevs": 4, 00:17:14.840 "num_base_bdevs_discovered": 4, 00:17:14.840 "num_base_bdevs_operational": 4, 00:17:14.840 "process": { 00:17:14.840 "type": "rebuild", 00:17:14.840 "target": "spare", 00:17:14.840 "progress": { 00:17:14.840 "blocks": 130560, 00:17:14.840 "percent": 68 00:17:14.840 } 00:17:14.840 }, 00:17:14.840 "base_bdevs_list": [ 00:17:14.840 { 00:17:14.840 "name": "spare", 00:17:14.840 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:14.840 "is_configured": true, 00:17:14.840 "data_offset": 2048, 00:17:14.840 "data_size": 63488 00:17:14.840 }, 00:17:14.840 { 00:17:14.840 "name": "BaseBdev2", 00:17:14.840 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:14.840 "is_configured": true, 00:17:14.840 "data_offset": 2048, 00:17:14.840 "data_size": 63488 00:17:14.840 }, 00:17:14.840 { 00:17:14.840 "name": "BaseBdev3", 00:17:14.840 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:14.840 "is_configured": true, 00:17:14.840 "data_offset": 2048, 00:17:14.840 "data_size": 63488 00:17:14.840 }, 00:17:14.840 { 00:17:14.840 "name": "BaseBdev4", 00:17:14.840 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:14.840 "is_configured": true, 00:17:14.840 "data_offset": 2048, 00:17:14.840 "data_size": 63488 00:17:14.840 } 00:17:14.840 ] 00:17:14.840 }' 00:17:14.840 16:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.840 16:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.840 16:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:14.840 16:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.840 16:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.777 "name": "raid_bdev1", 00:17:15.777 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:15.777 "strip_size_kb": 64, 00:17:15.777 "state": "online", 00:17:15.777 "raid_level": "raid5f", 00:17:15.777 "superblock": true, 00:17:15.777 "num_base_bdevs": 4, 00:17:15.777 "num_base_bdevs_discovered": 4, 00:17:15.777 "num_base_bdevs_operational": 4, 00:17:15.777 "process": { 00:17:15.777 "type": "rebuild", 00:17:15.777 "target": "spare", 
00:17:15.777 "progress": { 00:17:15.777 "blocks": 153600, 00:17:15.777 "percent": 80 00:17:15.777 } 00:17:15.777 }, 00:17:15.777 "base_bdevs_list": [ 00:17:15.777 { 00:17:15.777 "name": "spare", 00:17:15.777 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:15.777 "is_configured": true, 00:17:15.777 "data_offset": 2048, 00:17:15.777 "data_size": 63488 00:17:15.777 }, 00:17:15.777 { 00:17:15.777 "name": "BaseBdev2", 00:17:15.777 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:15.777 "is_configured": true, 00:17:15.777 "data_offset": 2048, 00:17:15.777 "data_size": 63488 00:17:15.777 }, 00:17:15.777 { 00:17:15.777 "name": "BaseBdev3", 00:17:15.777 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:15.777 "is_configured": true, 00:17:15.777 "data_offset": 2048, 00:17:15.777 "data_size": 63488 00:17:15.777 }, 00:17:15.777 { 00:17:15.777 "name": "BaseBdev4", 00:17:15.777 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:15.777 "is_configured": true, 00:17:15.777 "data_offset": 2048, 00:17:15.777 "data_size": 63488 00:17:15.777 } 00:17:15.777 ] 00:17:15.777 }' 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.777 16:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.156 "name": "raid_bdev1", 00:17:17.156 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:17.156 "strip_size_kb": 64, 00:17:17.156 "state": "online", 00:17:17.156 "raid_level": "raid5f", 00:17:17.156 "superblock": true, 00:17:17.156 "num_base_bdevs": 4, 00:17:17.156 "num_base_bdevs_discovered": 4, 00:17:17.156 "num_base_bdevs_operational": 4, 00:17:17.156 "process": { 00:17:17.156 "type": "rebuild", 00:17:17.156 "target": "spare", 00:17:17.156 "progress": { 00:17:17.156 "blocks": 174720, 00:17:17.156 "percent": 91 00:17:17.156 } 00:17:17.156 }, 00:17:17.156 "base_bdevs_list": [ 00:17:17.156 { 00:17:17.156 "name": "spare", 00:17:17.156 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:17.156 "is_configured": true, 00:17:17.156 "data_offset": 2048, 00:17:17.156 "data_size": 63488 00:17:17.156 }, 00:17:17.156 { 00:17:17.156 "name": "BaseBdev2", 00:17:17.156 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:17.156 "is_configured": true, 00:17:17.156 
"data_offset": 2048, 00:17:17.156 "data_size": 63488 00:17:17.156 }, 00:17:17.156 { 00:17:17.156 "name": "BaseBdev3", 00:17:17.156 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:17.156 "is_configured": true, 00:17:17.156 "data_offset": 2048, 00:17:17.156 "data_size": 63488 00:17:17.156 }, 00:17:17.156 { 00:17:17.156 "name": "BaseBdev4", 00:17:17.156 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:17.156 "is_configured": true, 00:17:17.156 "data_offset": 2048, 00:17:17.156 "data_size": 63488 00:17:17.156 } 00:17:17.156 ] 00:17:17.156 }' 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.156 16:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.725 [2024-11-08 16:58:47.122673] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:17.725 [2024-11-08 16:58:47.122802] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:17.725 [2024-11-08 16:58:47.122970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.985 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.985 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.985 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.985 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.985 16:58:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.985 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.985 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.985 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.985 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.985 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.985 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.985 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.985 "name": "raid_bdev1", 00:17:17.985 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:17.985 "strip_size_kb": 64, 00:17:17.985 "state": "online", 00:17:17.985 "raid_level": "raid5f", 00:17:17.985 "superblock": true, 00:17:17.985 "num_base_bdevs": 4, 00:17:17.985 "num_base_bdevs_discovered": 4, 00:17:17.986 "num_base_bdevs_operational": 4, 00:17:17.986 "base_bdevs_list": [ 00:17:17.986 { 00:17:17.986 "name": "spare", 00:17:17.986 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:17.986 "is_configured": true, 00:17:17.986 "data_offset": 2048, 00:17:17.986 "data_size": 63488 00:17:17.986 }, 00:17:17.986 { 00:17:17.986 "name": "BaseBdev2", 00:17:17.986 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:17.986 "is_configured": true, 00:17:17.986 "data_offset": 2048, 00:17:17.986 "data_size": 63488 00:17:17.986 }, 00:17:17.986 { 00:17:17.986 "name": "BaseBdev3", 00:17:17.986 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:17.986 "is_configured": true, 00:17:17.986 "data_offset": 2048, 00:17:17.986 "data_size": 63488 00:17:17.986 }, 00:17:17.986 { 00:17:17.986 "name": "BaseBdev4", 00:17:17.986 "uuid": 
"919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:17.986 "is_configured": true, 00:17:17.986 "data_offset": 2048, 00:17:17.986 "data_size": 63488 00:17:17.986 } 00:17:17.986 ] 00:17:17.986 }' 00:17:17.986 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.246 "name": 
"raid_bdev1", 00:17:18.246 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:18.246 "strip_size_kb": 64, 00:17:18.246 "state": "online", 00:17:18.246 "raid_level": "raid5f", 00:17:18.246 "superblock": true, 00:17:18.246 "num_base_bdevs": 4, 00:17:18.246 "num_base_bdevs_discovered": 4, 00:17:18.246 "num_base_bdevs_operational": 4, 00:17:18.246 "base_bdevs_list": [ 00:17:18.246 { 00:17:18.246 "name": "spare", 00:17:18.246 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:18.246 "is_configured": true, 00:17:18.246 "data_offset": 2048, 00:17:18.246 "data_size": 63488 00:17:18.246 }, 00:17:18.246 { 00:17:18.246 "name": "BaseBdev2", 00:17:18.246 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:18.246 "is_configured": true, 00:17:18.246 "data_offset": 2048, 00:17:18.246 "data_size": 63488 00:17:18.246 }, 00:17:18.246 { 00:17:18.246 "name": "BaseBdev3", 00:17:18.246 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:18.246 "is_configured": true, 00:17:18.246 "data_offset": 2048, 00:17:18.246 "data_size": 63488 00:17:18.246 }, 00:17:18.246 { 00:17:18.246 "name": "BaseBdev4", 00:17:18.246 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:18.246 "is_configured": true, 00:17:18.246 "data_offset": 2048, 00:17:18.246 "data_size": 63488 00:17:18.246 } 00:17:18.246 ] 00:17:18.246 }' 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.246 "name": "raid_bdev1", 00:17:18.246 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:18.246 "strip_size_kb": 64, 00:17:18.246 "state": "online", 00:17:18.246 "raid_level": "raid5f", 00:17:18.246 "superblock": true, 00:17:18.246 "num_base_bdevs": 4, 00:17:18.246 "num_base_bdevs_discovered": 4, 00:17:18.246 "num_base_bdevs_operational": 4, 00:17:18.246 "base_bdevs_list": [ 00:17:18.246 { 00:17:18.246 "name": "spare", 
00:17:18.246 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:18.246 "is_configured": true, 00:17:18.246 "data_offset": 2048, 00:17:18.246 "data_size": 63488 00:17:18.246 }, 00:17:18.246 { 00:17:18.246 "name": "BaseBdev2", 00:17:18.246 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:18.246 "is_configured": true, 00:17:18.246 "data_offset": 2048, 00:17:18.246 "data_size": 63488 00:17:18.246 }, 00:17:18.246 { 00:17:18.246 "name": "BaseBdev3", 00:17:18.246 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:18.246 "is_configured": true, 00:17:18.246 "data_offset": 2048, 00:17:18.246 "data_size": 63488 00:17:18.246 }, 00:17:18.246 { 00:17:18.246 "name": "BaseBdev4", 00:17:18.246 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:18.246 "is_configured": true, 00:17:18.246 "data_offset": 2048, 00:17:18.246 "data_size": 63488 00:17:18.246 } 00:17:18.246 ] 00:17:18.246 }' 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.246 16:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.815 [2024-11-08 16:58:48.163128] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.815 [2024-11-08 16:58:48.163167] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.815 [2024-11-08 16:58:48.163284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.815 [2024-11-08 16:58:48.163419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.815 [2024-11-08 16:58:48.163468] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:18.815 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:19.075 /dev/nbd0 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.075 1+0 records in 00:17:19.075 1+0 records out 00:17:19.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402533 s, 10.2 MB/s 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.075 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:19.339 /dev/nbd1 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.339 1+0 records in 00:17:19.339 1+0 records out 00:17:19.339 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000477155 s, 8.6 MB/s 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.339 16:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:19.609 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:19.609 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:19.609 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:19.609 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.609 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.609 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:19.609 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:19.609 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.609 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.609 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:19.869 16:58:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.869 [2024-11-08 16:58:49.370579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:19.869 [2024-11-08 16:58:49.370671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.869 [2024-11-08 16:58:49.370698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:19.869 [2024-11-08 16:58:49.370710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.869 [2024-11-08 16:58:49.373157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.869 [2024-11-08 16:58:49.373208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:19.869 [2024-11-08 16:58:49.373318] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:19.869 [2024-11-08 16:58:49.373394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.869 [2024-11-08 16:58:49.373554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:19.869 [2024-11-08 16:58:49.373701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:19.869 [2024-11-08 16:58:49.373795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:17:19.869 spare 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.869 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.129 [2024-11-08 16:58:49.473749] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:17:20.129 [2024-11-08 16:58:49.473823] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:20.129 [2024-11-08 16:58:49.474216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:17:20.129 [2024-11-08 16:58:49.474809] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:17:20.129 [2024-11-08 16:58:49.474839] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:17:20.129 [2024-11-08 16:58:49.475071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.129 "name": "raid_bdev1", 00:17:20.129 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:20.129 "strip_size_kb": 64, 00:17:20.129 "state": "online", 00:17:20.129 "raid_level": "raid5f", 00:17:20.129 "superblock": true, 00:17:20.129 "num_base_bdevs": 4, 00:17:20.129 "num_base_bdevs_discovered": 4, 00:17:20.129 "num_base_bdevs_operational": 4, 00:17:20.129 "base_bdevs_list": [ 00:17:20.129 { 00:17:20.129 "name": "spare", 00:17:20.129 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:20.129 "is_configured": true, 00:17:20.129 "data_offset": 2048, 00:17:20.129 "data_size": 63488 00:17:20.129 }, 00:17:20.129 { 00:17:20.129 "name": "BaseBdev2", 00:17:20.129 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:20.129 "is_configured": true, 00:17:20.129 "data_offset": 2048, 00:17:20.129 "data_size": 63488 00:17:20.129 }, 00:17:20.129 { 00:17:20.129 "name": 
"BaseBdev3", 00:17:20.129 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:20.129 "is_configured": true, 00:17:20.129 "data_offset": 2048, 00:17:20.129 "data_size": 63488 00:17:20.129 }, 00:17:20.129 { 00:17:20.129 "name": "BaseBdev4", 00:17:20.129 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:20.129 "is_configured": true, 00:17:20.129 "data_offset": 2048, 00:17:20.129 "data_size": 63488 00:17:20.129 } 00:17:20.129 ] 00:17:20.129 }' 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.129 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.697 "name": "raid_bdev1", 00:17:20.697 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:20.697 
"strip_size_kb": 64, 00:17:20.697 "state": "online", 00:17:20.697 "raid_level": "raid5f", 00:17:20.697 "superblock": true, 00:17:20.697 "num_base_bdevs": 4, 00:17:20.697 "num_base_bdevs_discovered": 4, 00:17:20.697 "num_base_bdevs_operational": 4, 00:17:20.697 "base_bdevs_list": [ 00:17:20.697 { 00:17:20.697 "name": "spare", 00:17:20.697 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:20.697 "is_configured": true, 00:17:20.697 "data_offset": 2048, 00:17:20.697 "data_size": 63488 00:17:20.697 }, 00:17:20.697 { 00:17:20.697 "name": "BaseBdev2", 00:17:20.697 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:20.697 "is_configured": true, 00:17:20.697 "data_offset": 2048, 00:17:20.697 "data_size": 63488 00:17:20.697 }, 00:17:20.697 { 00:17:20.697 "name": "BaseBdev3", 00:17:20.697 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:20.697 "is_configured": true, 00:17:20.697 "data_offset": 2048, 00:17:20.697 "data_size": 63488 00:17:20.697 }, 00:17:20.697 { 00:17:20.697 "name": "BaseBdev4", 00:17:20.697 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:20.697 "is_configured": true, 00:17:20.697 "data_offset": 2048, 00:17:20.697 "data_size": 63488 00:17:20.697 } 00:17:20.697 ] 00:17:20.697 }' 00:17:20.697 16:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.697 [2024-11-08 16:58:50.157976] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:17:20.697 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.698 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.698 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.698 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.698 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.698 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.698 "name": "raid_bdev1", 00:17:20.698 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:20.698 "strip_size_kb": 64, 00:17:20.698 "state": "online", 00:17:20.698 "raid_level": "raid5f", 00:17:20.698 "superblock": true, 00:17:20.698 "num_base_bdevs": 4, 00:17:20.698 "num_base_bdevs_discovered": 3, 00:17:20.698 "num_base_bdevs_operational": 3, 00:17:20.698 "base_bdevs_list": [ 00:17:20.698 { 00:17:20.698 "name": null, 00:17:20.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.698 "is_configured": false, 00:17:20.698 "data_offset": 0, 00:17:20.698 "data_size": 63488 00:17:20.698 }, 00:17:20.698 { 00:17:20.698 "name": "BaseBdev2", 00:17:20.698 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:20.698 "is_configured": true, 00:17:20.698 "data_offset": 2048, 00:17:20.698 "data_size": 63488 00:17:20.698 }, 00:17:20.698 { 00:17:20.698 "name": "BaseBdev3", 00:17:20.698 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:20.698 "is_configured": true, 00:17:20.698 "data_offset": 2048, 00:17:20.698 "data_size": 63488 00:17:20.698 }, 00:17:20.698 { 00:17:20.698 "name": "BaseBdev4", 00:17:20.698 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:20.698 "is_configured": true, 00:17:20.698 "data_offset": 2048, 00:17:20.698 "data_size": 63488 00:17:20.698 } 00:17:20.698 ] 00:17:20.698 }' 
00:17:20.698 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.698 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.266 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:21.266 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.266 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.266 [2024-11-08 16:58:50.629221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:21.266 [2024-11-08 16:58:50.629509] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:21.266 [2024-11-08 16:58:50.629539] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:21.266 [2024-11-08 16:58:50.629610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:21.266 [2024-11-08 16:58:50.633152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:17:21.266 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.266 16:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:21.266 [2024-11-08 16:58:50.635920] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.205 "name": "raid_bdev1", 00:17:22.205 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:22.205 "strip_size_kb": 64, 00:17:22.205 "state": "online", 00:17:22.205 "raid_level": "raid5f", 00:17:22.205 "superblock": true, 00:17:22.205 "num_base_bdevs": 4, 00:17:22.205 "num_base_bdevs_discovered": 4, 00:17:22.205 "num_base_bdevs_operational": 4, 00:17:22.205 "process": { 00:17:22.205 "type": "rebuild", 00:17:22.205 "target": "spare", 00:17:22.205 "progress": { 00:17:22.205 "blocks": 19200, 00:17:22.205 "percent": 10 00:17:22.205 } 00:17:22.205 }, 00:17:22.205 "base_bdevs_list": [ 00:17:22.205 { 00:17:22.205 "name": "spare", 00:17:22.205 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:22.205 "is_configured": true, 00:17:22.205 "data_offset": 2048, 00:17:22.205 "data_size": 63488 00:17:22.205 }, 00:17:22.205 { 00:17:22.205 "name": "BaseBdev2", 00:17:22.205 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:22.205 "is_configured": true, 00:17:22.205 "data_offset": 2048, 00:17:22.205 "data_size": 63488 00:17:22.205 }, 00:17:22.205 { 00:17:22.205 "name": "BaseBdev3", 00:17:22.205 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:22.205 
"is_configured": true, 00:17:22.205 "data_offset": 2048, 00:17:22.205 "data_size": 63488 00:17:22.205 }, 00:17:22.205 { 00:17:22.205 "name": "BaseBdev4", 00:17:22.205 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:22.205 "is_configured": true, 00:17:22.205 "data_offset": 2048, 00:17:22.205 "data_size": 63488 00:17:22.205 } 00:17:22.205 ] 00:17:22.205 }' 00:17:22.205 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.464 [2024-11-08 16:58:51.773090] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.464 [2024-11-08 16:58:51.846018] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:22.464 [2024-11-08 16:58:51.846125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.464 [2024-11-08 16:58:51.846153] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.464 [2024-11-08 16:58:51.846164] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.464 "name": "raid_bdev1", 00:17:22.464 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:22.464 "strip_size_kb": 64, 00:17:22.464 "state": "online", 00:17:22.464 "raid_level": "raid5f", 00:17:22.464 "superblock": true, 00:17:22.464 "num_base_bdevs": 4, 00:17:22.464 "num_base_bdevs_discovered": 3, 
00:17:22.464 "num_base_bdevs_operational": 3, 00:17:22.464 "base_bdevs_list": [ 00:17:22.464 { 00:17:22.464 "name": null, 00:17:22.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.464 "is_configured": false, 00:17:22.464 "data_offset": 0, 00:17:22.464 "data_size": 63488 00:17:22.464 }, 00:17:22.464 { 00:17:22.464 "name": "BaseBdev2", 00:17:22.464 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:22.464 "is_configured": true, 00:17:22.464 "data_offset": 2048, 00:17:22.464 "data_size": 63488 00:17:22.464 }, 00:17:22.464 { 00:17:22.464 "name": "BaseBdev3", 00:17:22.464 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:22.464 "is_configured": true, 00:17:22.464 "data_offset": 2048, 00:17:22.464 "data_size": 63488 00:17:22.464 }, 00:17:22.464 { 00:17:22.464 "name": "BaseBdev4", 00:17:22.464 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:22.464 "is_configured": true, 00:17:22.464 "data_offset": 2048, 00:17:22.464 "data_size": 63488 00:17:22.464 } 00:17:22.464 ] 00:17:22.464 }' 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.464 16:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.773 16:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:22.773 16:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.773 16:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.773 [2024-11-08 16:58:52.299484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:22.773 [2024-11-08 16:58:52.299691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.773 [2024-11-08 16:58:52.299782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:22.773 [2024-11-08 16:58:52.299829] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.773 [2024-11-08 16:58:52.300422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.773 [2024-11-08 16:58:52.300504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:22.773 [2024-11-08 16:58:52.300684] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:23.033 [2024-11-08 16:58:52.300740] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:23.033 [2024-11-08 16:58:52.300812] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:23.033 [2024-11-08 16:58:52.300852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.033 [2024-11-08 16:58:52.304440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:23.033 spare 00:17:23.033 16:58:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.033 16:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:23.033 [2024-11-08 16:58:52.307235] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.969 "name": "raid_bdev1", 00:17:23.969 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:23.969 "strip_size_kb": 64, 00:17:23.969 "state": "online", 00:17:23.969 "raid_level": "raid5f", 00:17:23.969 "superblock": true, 00:17:23.969 "num_base_bdevs": 4, 00:17:23.969 "num_base_bdevs_discovered": 4, 00:17:23.969 "num_base_bdevs_operational": 4, 00:17:23.969 "process": { 00:17:23.969 "type": "rebuild", 00:17:23.969 "target": "spare", 00:17:23.969 "progress": { 00:17:23.969 "blocks": 19200, 00:17:23.969 "percent": 10 00:17:23.969 } 00:17:23.969 }, 00:17:23.969 "base_bdevs_list": [ 00:17:23.969 { 00:17:23.969 "name": "spare", 00:17:23.969 "uuid": "c9a6c8e9-808b-542a-81e8-09a695c67046", 00:17:23.969 "is_configured": true, 00:17:23.969 "data_offset": 2048, 00:17:23.969 "data_size": 63488 00:17:23.969 }, 00:17:23.969 { 00:17:23.969 "name": "BaseBdev2", 00:17:23.969 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:23.969 "is_configured": true, 00:17:23.969 "data_offset": 2048, 00:17:23.969 "data_size": 63488 00:17:23.969 }, 00:17:23.969 { 00:17:23.969 "name": "BaseBdev3", 00:17:23.969 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:23.969 "is_configured": true, 00:17:23.969 "data_offset": 2048, 00:17:23.969 "data_size": 63488 00:17:23.969 }, 00:17:23.969 { 00:17:23.969 "name": "BaseBdev4", 00:17:23.969 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 
00:17:23.969 "is_configured": true, 00:17:23.969 "data_offset": 2048, 00:17:23.969 "data_size": 63488 00:17:23.969 } 00:17:23.969 ] 00:17:23.969 }' 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:23.969 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.970 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.970 [2024-11-08 16:58:53.460236] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.229 [2024-11-08 16:58:53.517181] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:24.229 [2024-11-08 16:58:53.517288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.229 [2024-11-08 16:58:53.517311] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.229 [2024-11-08 16:58:53.517324] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.229 "name": "raid_bdev1", 00:17:24.229 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:24.229 "strip_size_kb": 64, 00:17:24.229 "state": "online", 00:17:24.229 "raid_level": "raid5f", 00:17:24.229 "superblock": true, 00:17:24.229 "num_base_bdevs": 4, 00:17:24.229 "num_base_bdevs_discovered": 3, 00:17:24.229 "num_base_bdevs_operational": 3, 00:17:24.229 "base_bdevs_list": [ 00:17:24.229 { 00:17:24.229 "name": null, 00:17:24.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.229 "is_configured": 
false, 00:17:24.229 "data_offset": 0, 00:17:24.229 "data_size": 63488 00:17:24.229 }, 00:17:24.229 { 00:17:24.229 "name": "BaseBdev2", 00:17:24.229 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:24.229 "is_configured": true, 00:17:24.229 "data_offset": 2048, 00:17:24.229 "data_size": 63488 00:17:24.229 }, 00:17:24.229 { 00:17:24.229 "name": "BaseBdev3", 00:17:24.229 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:24.229 "is_configured": true, 00:17:24.229 "data_offset": 2048, 00:17:24.229 "data_size": 63488 00:17:24.229 }, 00:17:24.229 { 00:17:24.229 "name": "BaseBdev4", 00:17:24.229 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:24.229 "is_configured": true, 00:17:24.229 "data_offset": 2048, 00:17:24.229 "data_size": 63488 00:17:24.229 } 00:17:24.229 ] 00:17:24.229 }' 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.229 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.489 "name": "raid_bdev1", 00:17:24.489 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:24.489 "strip_size_kb": 64, 00:17:24.489 "state": "online", 00:17:24.489 "raid_level": "raid5f", 00:17:24.489 "superblock": true, 00:17:24.489 "num_base_bdevs": 4, 00:17:24.489 "num_base_bdevs_discovered": 3, 00:17:24.489 "num_base_bdevs_operational": 3, 00:17:24.489 "base_bdevs_list": [ 00:17:24.489 { 00:17:24.489 "name": null, 00:17:24.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.489 "is_configured": false, 00:17:24.489 "data_offset": 0, 00:17:24.489 "data_size": 63488 00:17:24.489 }, 00:17:24.489 { 00:17:24.489 "name": "BaseBdev2", 00:17:24.489 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:24.489 "is_configured": true, 00:17:24.489 "data_offset": 2048, 00:17:24.489 "data_size": 63488 00:17:24.489 }, 00:17:24.489 { 00:17:24.489 "name": "BaseBdev3", 00:17:24.489 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:24.489 "is_configured": true, 00:17:24.489 "data_offset": 2048, 00:17:24.489 "data_size": 63488 00:17:24.489 }, 00:17:24.489 { 00:17:24.489 "name": "BaseBdev4", 00:17:24.489 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:24.489 "is_configured": true, 00:17:24.489 "data_offset": 2048, 00:17:24.489 "data_size": 63488 00:17:24.489 } 00:17:24.489 ] 00:17:24.489 }' 00:17:24.489 16:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.748 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.748 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.748 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:17:24.748 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:24.748 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.748 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.748 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.748 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:24.748 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.748 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.748 [2024-11-08 16:58:54.089985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:24.748 [2024-11-08 16:58:54.090062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.748 [2024-11-08 16:58:54.090087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:24.748 [2024-11-08 16:58:54.090100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.748 [2024-11-08 16:58:54.090596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.748 [2024-11-08 16:58:54.090642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:24.748 [2024-11-08 16:58:54.090732] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:24.748 [2024-11-08 16:58:54.090753] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:24.748 [2024-11-08 16:58:54.090780] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain 
this bdev's uuid 00:17:24.748 [2024-11-08 16:58:54.090796] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:24.748 BaseBdev1 00:17:24.748 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.749 16:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.686 16:58:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.686 "name": "raid_bdev1", 00:17:25.686 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:25.686 "strip_size_kb": 64, 00:17:25.686 "state": "online", 00:17:25.686 "raid_level": "raid5f", 00:17:25.686 "superblock": true, 00:17:25.686 "num_base_bdevs": 4, 00:17:25.686 "num_base_bdevs_discovered": 3, 00:17:25.686 "num_base_bdevs_operational": 3, 00:17:25.686 "base_bdevs_list": [ 00:17:25.686 { 00:17:25.686 "name": null, 00:17:25.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.686 "is_configured": false, 00:17:25.686 "data_offset": 0, 00:17:25.686 "data_size": 63488 00:17:25.686 }, 00:17:25.686 { 00:17:25.686 "name": "BaseBdev2", 00:17:25.686 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:25.686 "is_configured": true, 00:17:25.686 "data_offset": 2048, 00:17:25.686 "data_size": 63488 00:17:25.686 }, 00:17:25.686 { 00:17:25.686 "name": "BaseBdev3", 00:17:25.686 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:25.686 "is_configured": true, 00:17:25.686 "data_offset": 2048, 00:17:25.686 "data_size": 63488 00:17:25.686 }, 00:17:25.686 { 00:17:25.686 "name": "BaseBdev4", 00:17:25.686 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:25.686 "is_configured": true, 00:17:25.686 "data_offset": 2048, 00:17:25.686 "data_size": 63488 00:17:25.686 } 00:17:25.686 ] 00:17:25.686 }' 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.686 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.253 16:58:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.253 "name": "raid_bdev1", 00:17:26.253 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:26.253 "strip_size_kb": 64, 00:17:26.253 "state": "online", 00:17:26.253 "raid_level": "raid5f", 00:17:26.253 "superblock": true, 00:17:26.253 "num_base_bdevs": 4, 00:17:26.253 "num_base_bdevs_discovered": 3, 00:17:26.253 "num_base_bdevs_operational": 3, 00:17:26.253 "base_bdevs_list": [ 00:17:26.253 { 00:17:26.253 "name": null, 00:17:26.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.253 "is_configured": false, 00:17:26.253 "data_offset": 0, 00:17:26.253 "data_size": 63488 00:17:26.253 }, 00:17:26.253 { 00:17:26.253 "name": "BaseBdev2", 00:17:26.253 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:26.253 "is_configured": true, 00:17:26.253 "data_offset": 2048, 00:17:26.253 "data_size": 63488 00:17:26.253 }, 00:17:26.253 { 00:17:26.253 "name": "BaseBdev3", 00:17:26.253 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:26.253 "is_configured": true, 00:17:26.253 "data_offset": 2048, 00:17:26.253 
"data_size": 63488 00:17:26.253 }, 00:17:26.253 { 00:17:26.253 "name": "BaseBdev4", 00:17:26.253 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:26.253 "is_configured": true, 00:17:26.253 "data_offset": 2048, 00:17:26.253 "data_size": 63488 00:17:26.253 } 00:17:26.253 ] 00:17:26.253 }' 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.253 [2024-11-08 
16:58:55.747663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.253 [2024-11-08 16:58:55.747946] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:26.253 [2024-11-08 16:58:55.748016] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:26.253 request: 00:17:26.253 { 00:17:26.253 "base_bdev": "BaseBdev1", 00:17:26.253 "raid_bdev": "raid_bdev1", 00:17:26.253 "method": "bdev_raid_add_base_bdev", 00:17:26.253 "req_id": 1 00:17:26.253 } 00:17:26.253 Got JSON-RPC error response 00:17:26.253 response: 00:17:26.253 { 00:17:26.253 "code": -22, 00:17:26.253 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:26.253 } 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:26.253 16:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.628 "name": "raid_bdev1", 00:17:27.628 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:27.628 "strip_size_kb": 64, 00:17:27.628 "state": "online", 00:17:27.628 "raid_level": "raid5f", 00:17:27.628 "superblock": true, 00:17:27.628 "num_base_bdevs": 4, 00:17:27.628 "num_base_bdevs_discovered": 3, 00:17:27.628 "num_base_bdevs_operational": 3, 00:17:27.628 "base_bdevs_list": [ 00:17:27.628 { 00:17:27.628 "name": null, 00:17:27.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.628 "is_configured": false, 00:17:27.628 "data_offset": 0, 00:17:27.628 "data_size": 63488 00:17:27.628 }, 00:17:27.628 { 00:17:27.628 "name": "BaseBdev2", 00:17:27.628 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:27.628 
"is_configured": true, 00:17:27.628 "data_offset": 2048, 00:17:27.628 "data_size": 63488 00:17:27.628 }, 00:17:27.628 { 00:17:27.628 "name": "BaseBdev3", 00:17:27.628 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:27.628 "is_configured": true, 00:17:27.628 "data_offset": 2048, 00:17:27.628 "data_size": 63488 00:17:27.628 }, 00:17:27.628 { 00:17:27.628 "name": "BaseBdev4", 00:17:27.628 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:27.628 "is_configured": true, 00:17:27.628 "data_offset": 2048, 00:17:27.628 "data_size": 63488 00:17:27.628 } 00:17:27.628 ] 00:17:27.628 }' 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.628 16:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:27.887 "name": "raid_bdev1", 00:17:27.887 "uuid": "a46ebf9b-b902-44b1-8a9a-0567c611efe7", 00:17:27.887 "strip_size_kb": 64, 00:17:27.887 "state": "online", 00:17:27.887 "raid_level": "raid5f", 00:17:27.887 "superblock": true, 00:17:27.887 "num_base_bdevs": 4, 00:17:27.887 "num_base_bdevs_discovered": 3, 00:17:27.887 "num_base_bdevs_operational": 3, 00:17:27.887 "base_bdevs_list": [ 00:17:27.887 { 00:17:27.887 "name": null, 00:17:27.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.887 "is_configured": false, 00:17:27.887 "data_offset": 0, 00:17:27.887 "data_size": 63488 00:17:27.887 }, 00:17:27.887 { 00:17:27.887 "name": "BaseBdev2", 00:17:27.887 "uuid": "f9108cf0-bfeb-577d-bb60-66b3f5aeaf7c", 00:17:27.887 "is_configured": true, 00:17:27.887 "data_offset": 2048, 00:17:27.887 "data_size": 63488 00:17:27.887 }, 00:17:27.887 { 00:17:27.887 "name": "BaseBdev3", 00:17:27.887 "uuid": "6f957778-0221-5445-b1bb-2f22b479e348", 00:17:27.887 "is_configured": true, 00:17:27.887 "data_offset": 2048, 00:17:27.887 "data_size": 63488 00:17:27.887 }, 00:17:27.887 { 00:17:27.887 "name": "BaseBdev4", 00:17:27.887 "uuid": "919f9f4c-d546-57f0-9301-fde9fae06d41", 00:17:27.887 "is_configured": true, 00:17:27.887 "data_offset": 2048, 00:17:27.887 "data_size": 63488 00:17:27.887 } 00:17:27.887 ] 00:17:27.887 }' 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95637 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 
95637 ']' 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95637 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95637 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:27.887 killing process with pid 95637 00:17:27.887 Received shutdown signal, test time was about 60.000000 seconds 00:17:27.887 00:17:27.887 Latency(us) 00:17:27.887 [2024-11-08T16:58:57.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.887 [2024-11-08T16:58:57.415Z] =================================================================================================================== 00:17:27.887 [2024-11-08T16:58:57.415Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95637' 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95637 00:17:27.887 [2024-11-08 16:58:57.382550] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.887 [2024-11-08 16:58:57.382720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.887 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95637 00:17:27.887 [2024-11-08 16:58:57.382811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.887 [2024-11-08 16:58:57.382823] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:17:28.146 [2024-11-08 16:58:57.435481] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.146 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:28.146 00:17:28.146 real 0m25.705s 00:17:28.146 user 0m32.879s 00:17:28.146 sys 0m3.075s 00:17:28.146 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.146 16:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.146 ************************************ 00:17:28.146 END TEST raid5f_rebuild_test_sb 00:17:28.146 ************************************ 00:17:28.406 16:58:57 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:28.406 16:58:57 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:28.406 16:58:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:28.406 16:58:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:28.406 16:58:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.406 ************************************ 00:17:28.406 START TEST raid_state_function_test_sb_4k 00:17:28.406 ************************************ 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:28.406 16:58:57 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96435 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96435' 00:17:28.406 Process raid pid: 96435 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96435 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96435 ']' 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:28.406 16:58:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.406 [2024-11-08 16:58:57.847377] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:28.406 [2024-11-08 16:58:57.847526] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.665 [2024-11-08 16:58:58.011412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.665 [2024-11-08 16:58:58.064301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.665 [2024-11-08 16:58:58.108190] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.665 [2024-11-08 16:58:58.108232] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.230 [2024-11-08 16:58:58.710846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:29.230 [2024-11-08 16:58:58.710993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.230 [2024-11-08 16:58:58.711027] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.230 [2024-11-08 16:58:58.711051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.230 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.489 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.489 "name": "Existed_Raid", 00:17:29.489 "uuid": 
"3b6b639f-0ec0-48fd-afcb-d88195fc4e40", 00:17:29.489 "strip_size_kb": 0, 00:17:29.489 "state": "configuring", 00:17:29.489 "raid_level": "raid1", 00:17:29.489 "superblock": true, 00:17:29.489 "num_base_bdevs": 2, 00:17:29.489 "num_base_bdevs_discovered": 0, 00:17:29.489 "num_base_bdevs_operational": 2, 00:17:29.489 "base_bdevs_list": [ 00:17:29.489 { 00:17:29.489 "name": "BaseBdev1", 00:17:29.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.489 "is_configured": false, 00:17:29.489 "data_offset": 0, 00:17:29.489 "data_size": 0 00:17:29.489 }, 00:17:29.489 { 00:17:29.489 "name": "BaseBdev2", 00:17:29.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.489 "is_configured": false, 00:17:29.489 "data_offset": 0, 00:17:29.489 "data_size": 0 00:17:29.489 } 00:17:29.489 ] 00:17:29.489 }' 00:17:29.489 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.489 16:58:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.748 [2024-11-08 16:58:59.157954] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.748 [2024-11-08 16:58:59.158003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:29.748 16:58:59 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.748 [2024-11-08 16:58:59.165962] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:29.748 [2024-11-08 16:58:59.166006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.748 [2024-11-08 16:58:59.166015] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.748 [2024-11-08 16:58:59.166024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.748 [2024-11-08 16:58:59.186762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.748 BaseBdev1 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:29.748 16:58:59 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.749 [ 00:17:29.749 { 00:17:29.749 "name": "BaseBdev1", 00:17:29.749 "aliases": [ 00:17:29.749 "573f1c65-beb2-41b1-a29d-3809deb0c516" 00:17:29.749 ], 00:17:29.749 "product_name": "Malloc disk", 00:17:29.749 "block_size": 4096, 00:17:29.749 "num_blocks": 8192, 00:17:29.749 "uuid": "573f1c65-beb2-41b1-a29d-3809deb0c516", 00:17:29.749 "assigned_rate_limits": { 00:17:29.749 "rw_ios_per_sec": 0, 00:17:29.749 "rw_mbytes_per_sec": 0, 00:17:29.749 "r_mbytes_per_sec": 0, 00:17:29.749 "w_mbytes_per_sec": 0 00:17:29.749 }, 00:17:29.749 "claimed": true, 00:17:29.749 "claim_type": "exclusive_write", 00:17:29.749 "zoned": false, 00:17:29.749 "supported_io_types": { 00:17:29.749 "read": true, 00:17:29.749 "write": true, 00:17:29.749 "unmap": true, 00:17:29.749 "flush": true, 00:17:29.749 "reset": true, 00:17:29.749 "nvme_admin": false, 00:17:29.749 "nvme_io": false, 00:17:29.749 "nvme_io_md": false, 00:17:29.749 "write_zeroes": true, 00:17:29.749 "zcopy": true, 00:17:29.749 
"get_zone_info": false, 00:17:29.749 "zone_management": false, 00:17:29.749 "zone_append": false, 00:17:29.749 "compare": false, 00:17:29.749 "compare_and_write": false, 00:17:29.749 "abort": true, 00:17:29.749 "seek_hole": false, 00:17:29.749 "seek_data": false, 00:17:29.749 "copy": true, 00:17:29.749 "nvme_iov_md": false 00:17:29.749 }, 00:17:29.749 "memory_domains": [ 00:17:29.749 { 00:17:29.749 "dma_device_id": "system", 00:17:29.749 "dma_device_type": 1 00:17:29.749 }, 00:17:29.749 { 00:17:29.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.749 "dma_device_type": 2 00:17:29.749 } 00:17:29.749 ], 00:17:29.749 "driver_specific": {} 00:17:29.749 } 00:17:29.749 ] 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.749 "name": "Existed_Raid", 00:17:29.749 "uuid": "38b8170b-624c-4810-a65c-3bce57dda11e", 00:17:29.749 "strip_size_kb": 0, 00:17:29.749 "state": "configuring", 00:17:29.749 "raid_level": "raid1", 00:17:29.749 "superblock": true, 00:17:29.749 "num_base_bdevs": 2, 00:17:29.749 "num_base_bdevs_discovered": 1, 00:17:29.749 "num_base_bdevs_operational": 2, 00:17:29.749 "base_bdevs_list": [ 00:17:29.749 { 00:17:29.749 "name": "BaseBdev1", 00:17:29.749 "uuid": "573f1c65-beb2-41b1-a29d-3809deb0c516", 00:17:29.749 "is_configured": true, 00:17:29.749 "data_offset": 256, 00:17:29.749 "data_size": 7936 00:17:29.749 }, 00:17:29.749 { 00:17:29.749 "name": "BaseBdev2", 00:17:29.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.749 "is_configured": false, 00:17:29.749 "data_offset": 0, 00:17:29.749 "data_size": 0 00:17:29.749 } 00:17:29.749 ] 00:17:29.749 }' 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.749 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.321 [2024-11-08 16:58:59.694009] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:30.321 [2024-11-08 16:58:59.694148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.321 [2024-11-08 16:58:59.706010] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.321 [2024-11-08 16:58:59.708148] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.321 [2024-11-08 16:58:59.708275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:30.321 16:58:59 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.321 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.321 "name": "Existed_Raid", 00:17:30.321 "uuid": "30dfd142-8797-4ff5-b635-9ece49cd4343", 00:17:30.321 "strip_size_kb": 0, 00:17:30.321 "state": "configuring", 00:17:30.321 "raid_level": "raid1", 00:17:30.321 "superblock": true, 
00:17:30.321 "num_base_bdevs": 2, 00:17:30.321 "num_base_bdevs_discovered": 1, 00:17:30.322 "num_base_bdevs_operational": 2, 00:17:30.322 "base_bdevs_list": [ 00:17:30.322 { 00:17:30.322 "name": "BaseBdev1", 00:17:30.322 "uuid": "573f1c65-beb2-41b1-a29d-3809deb0c516", 00:17:30.322 "is_configured": true, 00:17:30.322 "data_offset": 256, 00:17:30.322 "data_size": 7936 00:17:30.322 }, 00:17:30.322 { 00:17:30.322 "name": "BaseBdev2", 00:17:30.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.322 "is_configured": false, 00:17:30.322 "data_offset": 0, 00:17:30.322 "data_size": 0 00:17:30.322 } 00:17:30.322 ] 00:17:30.322 }' 00:17:30.322 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.322 16:58:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.891 [2024-11-08 16:59:00.236258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.891 [2024-11-08 16:59:00.236620] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:17:30.891 [2024-11-08 16:59:00.236700] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:30.891 BaseBdev2 00:17:30.891 [2024-11-08 16:59:00.237081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:30.891 [2024-11-08 16:59:00.237254] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:17:30.891 [2024-11-08 16:59:00.237331] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000006980 00:17:30.891 [2024-11-08 16:59:00.237539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.891 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.891 [ 00:17:30.891 { 00:17:30.891 "name": "BaseBdev2", 00:17:30.891 "aliases": [ 00:17:30.891 "9cb32c69-1b12-4e11-93a0-71ef12e00420" 00:17:30.891 ], 00:17:30.891 "product_name": "Malloc 
disk", 00:17:30.891 "block_size": 4096, 00:17:30.892 "num_blocks": 8192, 00:17:30.892 "uuid": "9cb32c69-1b12-4e11-93a0-71ef12e00420", 00:17:30.892 "assigned_rate_limits": { 00:17:30.892 "rw_ios_per_sec": 0, 00:17:30.892 "rw_mbytes_per_sec": 0, 00:17:30.892 "r_mbytes_per_sec": 0, 00:17:30.892 "w_mbytes_per_sec": 0 00:17:30.892 }, 00:17:30.892 "claimed": true, 00:17:30.892 "claim_type": "exclusive_write", 00:17:30.892 "zoned": false, 00:17:30.892 "supported_io_types": { 00:17:30.892 "read": true, 00:17:30.892 "write": true, 00:17:30.892 "unmap": true, 00:17:30.892 "flush": true, 00:17:30.892 "reset": true, 00:17:30.892 "nvme_admin": false, 00:17:30.892 "nvme_io": false, 00:17:30.892 "nvme_io_md": false, 00:17:30.892 "write_zeroes": true, 00:17:30.892 "zcopy": true, 00:17:30.892 "get_zone_info": false, 00:17:30.892 "zone_management": false, 00:17:30.892 "zone_append": false, 00:17:30.892 "compare": false, 00:17:30.892 "compare_and_write": false, 00:17:30.892 "abort": true, 00:17:30.892 "seek_hole": false, 00:17:30.892 "seek_data": false, 00:17:30.892 "copy": true, 00:17:30.892 "nvme_iov_md": false 00:17:30.892 }, 00:17:30.892 "memory_domains": [ 00:17:30.892 { 00:17:30.892 "dma_device_id": "system", 00:17:30.892 "dma_device_type": 1 00:17:30.892 }, 00:17:30.892 { 00:17:30.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.892 "dma_device_type": 2 00:17:30.892 } 00:17:30.892 ], 00:17:30.892 "driver_specific": {} 00:17:30.892 } 00:17:30.892 ] 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.892 "name": "Existed_Raid", 00:17:30.892 "uuid": "30dfd142-8797-4ff5-b635-9ece49cd4343", 00:17:30.892 "strip_size_kb": 0, 00:17:30.892 "state": "online", 
00:17:30.892 "raid_level": "raid1", 00:17:30.892 "superblock": true, 00:17:30.892 "num_base_bdevs": 2, 00:17:30.892 "num_base_bdevs_discovered": 2, 00:17:30.892 "num_base_bdevs_operational": 2, 00:17:30.892 "base_bdevs_list": [ 00:17:30.892 { 00:17:30.892 "name": "BaseBdev1", 00:17:30.892 "uuid": "573f1c65-beb2-41b1-a29d-3809deb0c516", 00:17:30.892 "is_configured": true, 00:17:30.892 "data_offset": 256, 00:17:30.892 "data_size": 7936 00:17:30.892 }, 00:17:30.892 { 00:17:30.892 "name": "BaseBdev2", 00:17:30.892 "uuid": "9cb32c69-1b12-4e11-93a0-71ef12e00420", 00:17:30.892 "is_configured": true, 00:17:30.892 "data_offset": 256, 00:17:30.892 "data_size": 7936 00:17:30.892 } 00:17:30.892 ] 00:17:30.892 }' 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.892 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.459 [2024-11-08 16:59:00.727850] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.459 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:31.459 "name": "Existed_Raid", 00:17:31.459 "aliases": [ 00:17:31.459 "30dfd142-8797-4ff5-b635-9ece49cd4343" 00:17:31.459 ], 00:17:31.459 "product_name": "Raid Volume", 00:17:31.459 "block_size": 4096, 00:17:31.459 "num_blocks": 7936, 00:17:31.459 "uuid": "30dfd142-8797-4ff5-b635-9ece49cd4343", 00:17:31.459 "assigned_rate_limits": { 00:17:31.459 "rw_ios_per_sec": 0, 00:17:31.459 "rw_mbytes_per_sec": 0, 00:17:31.459 "r_mbytes_per_sec": 0, 00:17:31.459 "w_mbytes_per_sec": 0 00:17:31.459 }, 00:17:31.459 "claimed": false, 00:17:31.459 "zoned": false, 00:17:31.459 "supported_io_types": { 00:17:31.459 "read": true, 00:17:31.459 "write": true, 00:17:31.459 "unmap": false, 00:17:31.459 "flush": false, 00:17:31.459 "reset": true, 00:17:31.459 "nvme_admin": false, 00:17:31.459 "nvme_io": false, 00:17:31.459 "nvme_io_md": false, 00:17:31.459 "write_zeroes": true, 00:17:31.459 "zcopy": false, 00:17:31.459 "get_zone_info": false, 00:17:31.459 "zone_management": false, 00:17:31.459 "zone_append": false, 00:17:31.459 "compare": false, 00:17:31.459 "compare_and_write": false, 00:17:31.459 "abort": false, 00:17:31.459 "seek_hole": false, 00:17:31.459 "seek_data": false, 00:17:31.459 "copy": false, 00:17:31.459 "nvme_iov_md": false 00:17:31.459 }, 00:17:31.459 "memory_domains": [ 00:17:31.459 { 00:17:31.459 "dma_device_id": "system", 00:17:31.459 "dma_device_type": 1 00:17:31.459 }, 00:17:31.459 { 00:17:31.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.460 "dma_device_type": 2 00:17:31.460 }, 00:17:31.460 { 00:17:31.460 
"dma_device_id": "system", 00:17:31.460 "dma_device_type": 1 00:17:31.460 }, 00:17:31.460 { 00:17:31.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.460 "dma_device_type": 2 00:17:31.460 } 00:17:31.460 ], 00:17:31.460 "driver_specific": { 00:17:31.460 "raid": { 00:17:31.460 "uuid": "30dfd142-8797-4ff5-b635-9ece49cd4343", 00:17:31.460 "strip_size_kb": 0, 00:17:31.460 "state": "online", 00:17:31.460 "raid_level": "raid1", 00:17:31.460 "superblock": true, 00:17:31.460 "num_base_bdevs": 2, 00:17:31.460 "num_base_bdevs_discovered": 2, 00:17:31.460 "num_base_bdevs_operational": 2, 00:17:31.460 "base_bdevs_list": [ 00:17:31.460 { 00:17:31.460 "name": "BaseBdev1", 00:17:31.460 "uuid": "573f1c65-beb2-41b1-a29d-3809deb0c516", 00:17:31.460 "is_configured": true, 00:17:31.460 "data_offset": 256, 00:17:31.460 "data_size": 7936 00:17:31.460 }, 00:17:31.460 { 00:17:31.460 "name": "BaseBdev2", 00:17:31.460 "uuid": "9cb32c69-1b12-4e11-93a0-71ef12e00420", 00:17:31.460 "is_configured": true, 00:17:31.460 "data_offset": 256, 00:17:31.460 "data_size": 7936 00:17:31.460 } 00:17:31.460 ] 00:17:31.460 } 00:17:31.460 } 00:17:31.460 }' 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:31.460 BaseBdev2' 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.460 
16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.460 [2024-11-08 16:59:00.927296] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.460 16:59:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.460 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.719 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.719 "name": "Existed_Raid", 00:17:31.719 "uuid": "30dfd142-8797-4ff5-b635-9ece49cd4343", 00:17:31.719 "strip_size_kb": 0, 00:17:31.719 "state": "online", 00:17:31.719 "raid_level": "raid1", 00:17:31.719 "superblock": true, 00:17:31.719 "num_base_bdevs": 2, 00:17:31.719 "num_base_bdevs_discovered": 1, 00:17:31.719 "num_base_bdevs_operational": 1, 00:17:31.719 "base_bdevs_list": [ 00:17:31.719 { 00:17:31.719 "name": null, 00:17:31.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.719 "is_configured": false, 00:17:31.719 "data_offset": 0, 00:17:31.719 "data_size": 7936 00:17:31.719 }, 00:17:31.719 { 00:17:31.719 "name": "BaseBdev2", 00:17:31.719 "uuid": "9cb32c69-1b12-4e11-93a0-71ef12e00420", 00:17:31.719 "is_configured": true, 00:17:31.719 "data_offset": 256, 00:17:31.719 "data_size": 7936 00:17:31.719 } 00:17:31.719 ] 00:17:31.719 }' 00:17:31.719 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.719 16:59:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:31.978 16:59:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.978 [2024-11-08 16:59:01.434388] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:31.978 [2024-11-08 16:59:01.434584] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.978 [2024-11-08 16:59:01.446827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.978 [2024-11-08 16:59:01.446969] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.978 [2024-11-08 16:59:01.447020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:17:31.978 16:59:01 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96435 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96435 ']' 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96435 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.978 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96435 00:17:32.238 killing process with pid 96435 00:17:32.238 16:59:01 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:32.238 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:32.238 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96435' 00:17:32.238 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96435 00:17:32.238 [2024-11-08 16:59:01.527551] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:32.238 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96435 00:17:32.238 [2024-11-08 16:59:01.528609] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.496 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:32.496 00:17:32.496 real 0m4.015s 00:17:32.496 user 0m6.315s 00:17:32.496 sys 0m0.848s 00:17:32.496 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.496 16:59:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.496 ************************************ 00:17:32.496 END TEST raid_state_function_test_sb_4k 00:17:32.496 ************************************ 00:17:32.496 16:59:01 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:32.496 16:59:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:32.496 16:59:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.496 16:59:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.496 ************************************ 00:17:32.496 START TEST raid_superblock_test_4k 00:17:32.496 ************************************ 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96676 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 96676 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96676 ']' 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.496 16:59:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.497 [2024-11-08 16:59:01.923761] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:32.497 [2024-11-08 16:59:01.923984] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96676 ] 00:17:32.755 [2024-11-08 16:59:02.085888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.755 [2024-11-08 16:59:02.134741] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.755 [2024-11-08 16:59:02.179627] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.755 [2024-11-08 16:59:02.179775] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:17:33.323 16:59:02 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.323 malloc1 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.323 [2024-11-08 16:59:02.788159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.323 [2024-11-08 16:59:02.788291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.323 
[2024-11-08 16:59:02.788361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:33.323 [2024-11-08 16:59:02.788416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.323 [2024-11-08 16:59:02.790681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.323 [2024-11-08 16:59:02.790783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.323 pt1 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.323 malloc2 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.323 [2024-11-08 16:59:02.828025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.323 [2024-11-08 16:59:02.828168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.323 [2024-11-08 16:59:02.828201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:33.323 [2024-11-08 16:59:02.828221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.323 [2024-11-08 16:59:02.831170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.323 [2024-11-08 16:59:02.831290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.323 pt2 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.323 [2024-11-08 16:59:02.840163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.323 [2024-11-08 16:59:02.842129] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.323 [2024-11-08 16:59:02.842304] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:33.323 [2024-11-08 16:59:02.842322] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:33.323 [2024-11-08 16:59:02.842638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:33.323 [2024-11-08 16:59:02.842808] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:33.323 [2024-11-08 16:59:02.842830] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:17:33.323 [2024-11-08 16:59:02.842984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:33.323 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.582 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.582 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.582 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.582 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.582 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.582 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.582 "name": "raid_bdev1", 00:17:33.582 "uuid": "a4743ad7-54ad-4784-a7dd-8fd25bff0a46", 00:17:33.582 "strip_size_kb": 0, 00:17:33.583 "state": "online", 00:17:33.583 "raid_level": "raid1", 00:17:33.583 "superblock": true, 00:17:33.583 "num_base_bdevs": 2, 00:17:33.583 "num_base_bdevs_discovered": 2, 00:17:33.583 "num_base_bdevs_operational": 2, 00:17:33.583 "base_bdevs_list": [ 00:17:33.583 { 00:17:33.583 "name": "pt1", 00:17:33.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.583 "is_configured": true, 00:17:33.583 "data_offset": 256, 00:17:33.583 "data_size": 7936 00:17:33.583 }, 00:17:33.583 { 00:17:33.583 "name": "pt2", 00:17:33.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.583 "is_configured": true, 00:17:33.583 "data_offset": 256, 00:17:33.583 "data_size": 7936 00:17:33.583 } 00:17:33.583 ] 00:17:33.583 }' 00:17:33.583 16:59:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.583 16:59:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.841 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:33.841 16:59:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:33.841 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.841 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.841 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.841 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.841 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.841 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.841 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.841 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.841 [2024-11-08 16:59:03.263850] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.841 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.841 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.841 "name": "raid_bdev1", 00:17:33.841 "aliases": [ 00:17:33.841 "a4743ad7-54ad-4784-a7dd-8fd25bff0a46" 00:17:33.841 ], 00:17:33.841 "product_name": "Raid Volume", 00:17:33.841 "block_size": 4096, 00:17:33.841 "num_blocks": 7936, 00:17:33.841 "uuid": "a4743ad7-54ad-4784-a7dd-8fd25bff0a46", 00:17:33.841 "assigned_rate_limits": { 00:17:33.841 "rw_ios_per_sec": 0, 00:17:33.841 "rw_mbytes_per_sec": 0, 00:17:33.841 "r_mbytes_per_sec": 0, 00:17:33.841 "w_mbytes_per_sec": 0 00:17:33.841 }, 00:17:33.841 "claimed": false, 00:17:33.841 "zoned": false, 00:17:33.841 "supported_io_types": { 00:17:33.841 "read": true, 00:17:33.841 "write": true, 00:17:33.841 "unmap": false, 00:17:33.841 "flush": false, 
00:17:33.841 "reset": true, 00:17:33.841 "nvme_admin": false, 00:17:33.841 "nvme_io": false, 00:17:33.841 "nvme_io_md": false, 00:17:33.841 "write_zeroes": true, 00:17:33.841 "zcopy": false, 00:17:33.841 "get_zone_info": false, 00:17:33.841 "zone_management": false, 00:17:33.841 "zone_append": false, 00:17:33.841 "compare": false, 00:17:33.841 "compare_and_write": false, 00:17:33.841 "abort": false, 00:17:33.841 "seek_hole": false, 00:17:33.841 "seek_data": false, 00:17:33.841 "copy": false, 00:17:33.841 "nvme_iov_md": false 00:17:33.841 }, 00:17:33.841 "memory_domains": [ 00:17:33.841 { 00:17:33.841 "dma_device_id": "system", 00:17:33.841 "dma_device_type": 1 00:17:33.841 }, 00:17:33.841 { 00:17:33.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.841 "dma_device_type": 2 00:17:33.841 }, 00:17:33.841 { 00:17:33.841 "dma_device_id": "system", 00:17:33.841 "dma_device_type": 1 00:17:33.841 }, 00:17:33.841 { 00:17:33.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.841 "dma_device_type": 2 00:17:33.841 } 00:17:33.841 ], 00:17:33.842 "driver_specific": { 00:17:33.842 "raid": { 00:17:33.842 "uuid": "a4743ad7-54ad-4784-a7dd-8fd25bff0a46", 00:17:33.842 "strip_size_kb": 0, 00:17:33.842 "state": "online", 00:17:33.842 "raid_level": "raid1", 00:17:33.842 "superblock": true, 00:17:33.842 "num_base_bdevs": 2, 00:17:33.842 "num_base_bdevs_discovered": 2, 00:17:33.842 "num_base_bdevs_operational": 2, 00:17:33.842 "base_bdevs_list": [ 00:17:33.842 { 00:17:33.842 "name": "pt1", 00:17:33.842 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.842 "is_configured": true, 00:17:33.842 "data_offset": 256, 00:17:33.842 "data_size": 7936 00:17:33.842 }, 00:17:33.842 { 00:17:33.842 "name": "pt2", 00:17:33.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.842 "is_configured": true, 00:17:33.842 "data_offset": 256, 00:17:33.842 "data_size": 7936 00:17:33.842 } 00:17:33.842 ] 00:17:33.842 } 00:17:33.842 } 00:17:33.842 }' 00:17:33.842 16:59:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.842 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:33.842 pt2' 00:17:33.842 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.101 [2024-11-08 16:59:03.515266] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a4743ad7-54ad-4784-a7dd-8fd25bff0a46 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z a4743ad7-54ad-4784-a7dd-8fd25bff0a46 ']' 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.101 [2024-11-08 16:59:03.566895] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.101 [2024-11-08 16:59:03.566930] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.101 [2024-11-08 16:59:03.567039] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.101 [2024-11-08 16:59:03.567121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.101 [2024-11-08 16:59:03.567133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.101 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.361 [2024-11-08 16:59:03.678777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:34.361 [2024-11-08 16:59:03.680868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:34.361 [2024-11-08 16:59:03.680978] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:34.361 [2024-11-08 16:59:03.681037] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:34.361 [2024-11-08 16:59:03.681058] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.361 [2024-11-08 16:59:03.681069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:17:34.361 request: 00:17:34.361 { 00:17:34.361 "name": "raid_bdev1", 00:17:34.361 "raid_level": "raid1", 00:17:34.361 "base_bdevs": [ 00:17:34.361 "malloc1", 00:17:34.361 "malloc2" 00:17:34.361 ], 00:17:34.361 "superblock": false, 00:17:34.361 "method": "bdev_raid_create", 00:17:34.361 "req_id": 1 00:17:34.361 } 00:17:34.361 Got JSON-RPC error response 00:17:34.361 response: 00:17:34.361 { 00:17:34.361 "code": -17, 00:17:34.361 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:34.361 } 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.361 [2024-11-08 16:59:03.738615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:34.361 [2024-11-08 16:59:03.738778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.361 [2024-11-08 16:59:03.738831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:34.361 [2024-11-08 16:59:03.738878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.361 [2024-11-08 16:59:03.741388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.361 [2024-11-08 16:59:03.741481] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:34.361 [2024-11-08 16:59:03.741618] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:34.361 [2024-11-08 16:59:03.741723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:34.361 pt1 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.361 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.361 "name": "raid_bdev1", 00:17:34.361 "uuid": "a4743ad7-54ad-4784-a7dd-8fd25bff0a46", 00:17:34.361 "strip_size_kb": 0, 00:17:34.361 "state": "configuring", 00:17:34.361 "raid_level": "raid1", 00:17:34.361 "superblock": true, 00:17:34.361 "num_base_bdevs": 2, 00:17:34.361 "num_base_bdevs_discovered": 1, 00:17:34.361 "num_base_bdevs_operational": 2, 00:17:34.361 "base_bdevs_list": [ 00:17:34.361 { 00:17:34.361 "name": "pt1", 00:17:34.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.361 "is_configured": true, 00:17:34.361 "data_offset": 256, 00:17:34.361 "data_size": 7936 00:17:34.361 }, 00:17:34.361 { 00:17:34.361 "name": null, 00:17:34.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.361 "is_configured": false, 00:17:34.361 "data_offset": 256, 00:17:34.362 "data_size": 7936 00:17:34.362 } 00:17:34.362 ] 00:17:34.362 }' 00:17:34.362 16:59:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.362 16:59:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:17:34.929 [2024-11-08 16:59:04.193899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:34.929 [2024-11-08 16:59:04.194085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.929 [2024-11-08 16:59:04.194127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:34.929 [2024-11-08 16:59:04.194141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.929 [2024-11-08 16:59:04.194705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.929 [2024-11-08 16:59:04.194733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:34.929 [2024-11-08 16:59:04.194842] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:34.929 [2024-11-08 16:59:04.194872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.929 [2024-11-08 16:59:04.195003] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:17:34.929 [2024-11-08 16:59:04.195024] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:34.929 [2024-11-08 16:59:04.195333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:34.929 [2024-11-08 16:59:04.195497] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:17:34.929 [2024-11-08 16:59:04.195526] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:17:34.929 [2024-11-08 16:59:04.195689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.929 pt2 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:34.929 16:59:04 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.929 "name": "raid_bdev1", 00:17:34.929 "uuid": "a4743ad7-54ad-4784-a7dd-8fd25bff0a46", 00:17:34.929 
"strip_size_kb": 0, 00:17:34.929 "state": "online", 00:17:34.929 "raid_level": "raid1", 00:17:34.929 "superblock": true, 00:17:34.929 "num_base_bdevs": 2, 00:17:34.929 "num_base_bdevs_discovered": 2, 00:17:34.929 "num_base_bdevs_operational": 2, 00:17:34.929 "base_bdevs_list": [ 00:17:34.929 { 00:17:34.929 "name": "pt1", 00:17:34.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.929 "is_configured": true, 00:17:34.929 "data_offset": 256, 00:17:34.929 "data_size": 7936 00:17:34.929 }, 00:17:34.929 { 00:17:34.929 "name": "pt2", 00:17:34.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.929 "is_configured": true, 00:17:34.929 "data_offset": 256, 00:17:34.929 "data_size": 7936 00:17:34.929 } 00:17:34.929 ] 00:17:34.929 }' 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.929 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.188 16:59:04 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.188 [2024-11-08 16:59:04.629427] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:35.188 "name": "raid_bdev1", 00:17:35.188 "aliases": [ 00:17:35.188 "a4743ad7-54ad-4784-a7dd-8fd25bff0a46" 00:17:35.188 ], 00:17:35.188 "product_name": "Raid Volume", 00:17:35.188 "block_size": 4096, 00:17:35.188 "num_blocks": 7936, 00:17:35.188 "uuid": "a4743ad7-54ad-4784-a7dd-8fd25bff0a46", 00:17:35.188 "assigned_rate_limits": { 00:17:35.188 "rw_ios_per_sec": 0, 00:17:35.188 "rw_mbytes_per_sec": 0, 00:17:35.188 "r_mbytes_per_sec": 0, 00:17:35.188 "w_mbytes_per_sec": 0 00:17:35.188 }, 00:17:35.188 "claimed": false, 00:17:35.188 "zoned": false, 00:17:35.188 "supported_io_types": { 00:17:35.188 "read": true, 00:17:35.188 "write": true, 00:17:35.188 "unmap": false, 00:17:35.188 "flush": false, 00:17:35.188 "reset": true, 00:17:35.188 "nvme_admin": false, 00:17:35.188 "nvme_io": false, 00:17:35.188 "nvme_io_md": false, 00:17:35.188 "write_zeroes": true, 00:17:35.188 "zcopy": false, 00:17:35.188 "get_zone_info": false, 00:17:35.188 "zone_management": false, 00:17:35.188 "zone_append": false, 00:17:35.188 "compare": false, 00:17:35.188 "compare_and_write": false, 00:17:35.188 "abort": false, 00:17:35.188 "seek_hole": false, 00:17:35.188 "seek_data": false, 00:17:35.188 "copy": false, 00:17:35.188 "nvme_iov_md": false 00:17:35.188 }, 00:17:35.188 "memory_domains": [ 00:17:35.188 { 00:17:35.188 "dma_device_id": "system", 00:17:35.188 "dma_device_type": 1 00:17:35.188 }, 00:17:35.188 { 00:17:35.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.188 "dma_device_type": 2 00:17:35.188 }, 00:17:35.188 { 00:17:35.188 "dma_device_id": "system", 00:17:35.188 
"dma_device_type": 1 00:17:35.188 }, 00:17:35.188 { 00:17:35.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.188 "dma_device_type": 2 00:17:35.188 } 00:17:35.188 ], 00:17:35.188 "driver_specific": { 00:17:35.188 "raid": { 00:17:35.188 "uuid": "a4743ad7-54ad-4784-a7dd-8fd25bff0a46", 00:17:35.188 "strip_size_kb": 0, 00:17:35.188 "state": "online", 00:17:35.188 "raid_level": "raid1", 00:17:35.188 "superblock": true, 00:17:35.188 "num_base_bdevs": 2, 00:17:35.188 "num_base_bdevs_discovered": 2, 00:17:35.188 "num_base_bdevs_operational": 2, 00:17:35.188 "base_bdevs_list": [ 00:17:35.188 { 00:17:35.188 "name": "pt1", 00:17:35.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.188 "is_configured": true, 00:17:35.188 "data_offset": 256, 00:17:35.188 "data_size": 7936 00:17:35.188 }, 00:17:35.188 { 00:17:35.188 "name": "pt2", 00:17:35.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.188 "is_configured": true, 00:17:35.188 "data_offset": 256, 00:17:35.188 "data_size": 7936 00:17:35.188 } 00:17:35.188 ] 00:17:35.188 } 00:17:35.188 } 00:17:35.188 }' 00:17:35.188 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:35.447 pt2' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.447 
16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.447 [2024-11-08 16:59:04.861011] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' a4743ad7-54ad-4784-a7dd-8fd25bff0a46 '!=' a4743ad7-54ad-4784-a7dd-8fd25bff0a46 ']' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.447 [2024-11-08 16:59:04.908697] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.447 "name": "raid_bdev1", 00:17:35.447 "uuid": "a4743ad7-54ad-4784-a7dd-8fd25bff0a46", 00:17:35.447 "strip_size_kb": 0, 00:17:35.447 "state": "online", 00:17:35.447 "raid_level": "raid1", 00:17:35.447 "superblock": true, 00:17:35.447 "num_base_bdevs": 2, 00:17:35.447 "num_base_bdevs_discovered": 1, 00:17:35.447 "num_base_bdevs_operational": 1, 00:17:35.447 "base_bdevs_list": [ 00:17:35.447 { 00:17:35.447 "name": null, 00:17:35.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.447 "is_configured": false, 00:17:35.447 "data_offset": 0, 00:17:35.447 "data_size": 7936 00:17:35.447 }, 00:17:35.447 { 00:17:35.447 "name": "pt2", 00:17:35.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.447 "is_configured": true, 00:17:35.447 "data_offset": 256, 00:17:35.447 "data_size": 7936 00:17:35.447 } 00:17:35.447 ] 00:17:35.447 }' 00:17:35.447 16:59:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.447 16:59:04 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.015 [2024-11-08 16:59:05.323866] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.015 [2024-11-08 16:59:05.323974] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.015 [2024-11-08 16:59:05.324119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.015 [2024-11-08 16:59:05.324203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.015 [2024-11-08 16:59:05.324267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:36.015 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.016 [2024-11-08 16:59:05.395783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.016 [2024-11-08 16:59:05.395908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.016 [2024-11-08 16:59:05.395939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:36.016 [2024-11-08 16:59:05.395952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.016 [2024-11-08 16:59:05.398276] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.016 [2024-11-08 16:59:05.398320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.016 [2024-11-08 16:59:05.398420] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:36.016 [2024-11-08 16:59:05.398460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.016 [2024-11-08 16:59:05.398548] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:17:36.016 [2024-11-08 16:59:05.398558] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:36.016 [2024-11-08 16:59:05.398813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:36.016 [2024-11-08 16:59:05.398961] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:17:36.016 [2024-11-08 16:59:05.398981] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:17:36.016 [2024-11-08 16:59:05.399112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.016 pt2 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.016 "name": "raid_bdev1", 00:17:36.016 "uuid": "a4743ad7-54ad-4784-a7dd-8fd25bff0a46", 00:17:36.016 "strip_size_kb": 0, 00:17:36.016 "state": "online", 00:17:36.016 "raid_level": "raid1", 00:17:36.016 "superblock": true, 00:17:36.016 "num_base_bdevs": 2, 00:17:36.016 "num_base_bdevs_discovered": 1, 00:17:36.016 "num_base_bdevs_operational": 1, 00:17:36.016 "base_bdevs_list": [ 00:17:36.016 { 00:17:36.016 "name": null, 00:17:36.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.016 "is_configured": false, 00:17:36.016 "data_offset": 256, 00:17:36.016 "data_size": 7936 00:17:36.016 }, 00:17:36.016 { 00:17:36.016 "name": "pt2", 00:17:36.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.016 "is_configured": true, 00:17:36.016 "data_offset": 256, 00:17:36.016 "data_size": 7936 00:17:36.016 } 00:17:36.016 ] 00:17:36.016 }' 
00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.016 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.584 [2024-11-08 16:59:05.855026] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.584 [2024-11-08 16:59:05.855062] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.584 [2024-11-08 16:59:05.855171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.584 [2024-11-08 16:59:05.855225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.584 [2024-11-08 16:59:05.855238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.584 [2024-11-08 16:59:05.906900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:36.584 [2024-11-08 16:59:05.906986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.584 [2024-11-08 16:59:05.907013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:36.584 [2024-11-08 16:59:05.907033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.584 [2024-11-08 16:59:05.909260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.584 [2024-11-08 16:59:05.909306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:36.584 [2024-11-08 16:59:05.909411] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:36.584 [2024-11-08 16:59:05.909458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:36.584 [2024-11-08 16:59:05.909568] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:36.584 [2024-11-08 16:59:05.909584] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.584 [2024-11-08 16:59:05.909604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:17:36.584 [2024-11-08 16:59:05.909643] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.584 [2024-11-08 16:59:05.909752] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:36.584 [2024-11-08 16:59:05.909764] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:36.584 [2024-11-08 16:59:05.910007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:36.584 [2024-11-08 16:59:05.910151] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:36.584 [2024-11-08 16:59:05.910168] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:36.584 [2024-11-08 16:59:05.910301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.584 pt1 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.584 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.585 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.585 "name": "raid_bdev1", 00:17:36.585 "uuid": "a4743ad7-54ad-4784-a7dd-8fd25bff0a46", 00:17:36.585 "strip_size_kb": 0, 00:17:36.585 "state": "online", 00:17:36.585 "raid_level": "raid1", 00:17:36.585 "superblock": true, 00:17:36.585 "num_base_bdevs": 2, 00:17:36.585 "num_base_bdevs_discovered": 1, 00:17:36.585 "num_base_bdevs_operational": 1, 00:17:36.585 "base_bdevs_list": [ 00:17:36.585 { 00:17:36.585 "name": null, 00:17:36.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.585 "is_configured": false, 00:17:36.585 "data_offset": 256, 00:17:36.585 "data_size": 7936 00:17:36.585 }, 00:17:36.585 { 00:17:36.585 "name": "pt2", 00:17:36.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.585 "is_configured": true, 00:17:36.585 "data_offset": 256, 00:17:36.585 "data_size": 7936 00:17:36.585 } 00:17:36.585 ] 00:17:36.585 }' 00:17:36.585 16:59:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.585 16:59:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.843 16:59:06 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:36.843 16:59:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:36.843 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.843 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.843 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.843 16:59:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:36.843 16:59:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.843 16:59:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:36.843 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.843 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.843 [2024-11-08 16:59:06.354502] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.843 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' a4743ad7-54ad-4784-a7dd-8fd25bff0a46 '!=' a4743ad7-54ad-4784-a7dd-8fd25bff0a46 ']' 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96676 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96676 ']' 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96676 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96676 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:37.101 killing process with pid 96676 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96676' 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96676 00:17:37.101 [2024-11-08 16:59:06.415260] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.101 [2024-11-08 16:59:06.415392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.101 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96676 00:17:37.101 [2024-11-08 16:59:06.415460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.101 [2024-11-08 16:59:06.415472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:37.101 [2024-11-08 16:59:06.439597] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.360 16:59:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:37.360 00:17:37.360 real 0m4.881s 00:17:37.360 user 0m7.909s 00:17:37.360 sys 0m1.040s 00:17:37.360 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.360 16:59:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.360 ************************************ 00:17:37.360 END TEST raid_superblock_test_4k 00:17:37.360 ************************************ 00:17:37.360 16:59:06 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:37.360 16:59:06 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:37.360 16:59:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:37.360 16:59:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:37.360 16:59:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:37.360 ************************************ 00:17:37.360 START TEST raid_rebuild_test_sb_4k 00:17:37.360 ************************************ 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96988 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96988 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96988 ']' 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:17:37.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.360 16:59:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.360 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:37.360 Zero copy mechanism will not be used. 00:17:37.360 [2024-11-08 16:59:06.879823] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:37.360 [2024-11-08 16:59:06.879958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96988 ] 00:17:37.618 [2024-11-08 16:59:07.042597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.618 [2024-11-08 16:59:07.094206] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.618 [2024-11-08 16:59:07.138829] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.618 [2024-11-08 16:59:07.138875] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:38.554 
16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.554 BaseBdev1_malloc 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.554 [2024-11-08 16:59:07.755075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:38.554 [2024-11-08 16:59:07.755163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.554 [2024-11-08 16:59:07.755193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:38.554 [2024-11-08 16:59:07.755211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.554 [2024-11-08 16:59:07.757585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.554 [2024-11-08 16:59:07.757627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:38.554 BaseBdev1 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.554 BaseBdev2_malloc 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.554 [2024-11-08 16:59:07.791561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:38.554 [2024-11-08 16:59:07.791652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.554 [2024-11-08 16:59:07.791680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:38.554 [2024-11-08 16:59:07.791693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.554 [2024-11-08 16:59:07.794017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.554 [2024-11-08 16:59:07.794063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:38.554 BaseBdev2 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.554 spare_malloc 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.554 spare_delay 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.554 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.554 [2024-11-08 16:59:07.832492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:38.554 [2024-11-08 16:59:07.832562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.555 [2024-11-08 16:59:07.832588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:38.555 [2024-11-08 16:59:07.832598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.555 [2024-11-08 16:59:07.834831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.555 [2024-11-08 16:59:07.834872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:38.555 spare 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.555 
[2024-11-08 16:59:07.844522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.555 [2024-11-08 16:59:07.846434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.555 [2024-11-08 16:59:07.846600] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:38.555 [2024-11-08 16:59:07.846614] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:38.555 [2024-11-08 16:59:07.846908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:38.555 [2024-11-08 16:59:07.847071] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:38.555 [2024-11-08 16:59:07.847099] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:17:38.555 [2024-11-08 16:59:07.847231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.555 "name": "raid_bdev1", 00:17:38.555 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:38.555 "strip_size_kb": 0, 00:17:38.555 "state": "online", 00:17:38.555 "raid_level": "raid1", 00:17:38.555 "superblock": true, 00:17:38.555 "num_base_bdevs": 2, 00:17:38.555 "num_base_bdevs_discovered": 2, 00:17:38.555 "num_base_bdevs_operational": 2, 00:17:38.555 "base_bdevs_list": [ 00:17:38.555 { 00:17:38.555 "name": "BaseBdev1", 00:17:38.555 "uuid": "2dc5dfbc-65f5-5303-a916-7a3bc7ae8bf0", 00:17:38.555 "is_configured": true, 00:17:38.555 "data_offset": 256, 00:17:38.555 "data_size": 7936 00:17:38.555 }, 00:17:38.555 { 00:17:38.555 "name": "BaseBdev2", 00:17:38.555 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:38.555 "is_configured": true, 00:17:38.555 "data_offset": 256, 00:17:38.555 "data_size": 7936 00:17:38.555 } 00:17:38.555 ] 00:17:38.555 }' 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.555 16:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:38.814 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:38.814 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:38.814 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.814 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.814 [2024-11-08 16:59:08.272174] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.814 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.814 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:38.814 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:38.814 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.814 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.814 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.814 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:39.073 [2024-11-08 16:59:08.527504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:39.073 /dev/nbd0 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:39.073 1+0 records in 00:17:39.073 1+0 records out 00:17:39.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272379 s, 15.0 MB/s 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:39.073 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.332 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:39.332 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:39.332 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:39.332 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:39.332 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:39.332 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:39.332 16:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:39.900 7936+0 records in 00:17:39.900 7936+0 records out 00:17:39.900 32505856 bytes (33 MB, 31 MiB) copied, 0.66324 s, 49.0 MB/s 00:17:39.900 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:39.900 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:39.900 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:39.900 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:39.900 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:39.900 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:39.900 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:40.159 [2024-11-08 16:59:09.490845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.159 [2024-11-08 16:59:09.506950] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.159 "name": 
"raid_bdev1", 00:17:40.159 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:40.159 "strip_size_kb": 0, 00:17:40.159 "state": "online", 00:17:40.159 "raid_level": "raid1", 00:17:40.159 "superblock": true, 00:17:40.159 "num_base_bdevs": 2, 00:17:40.159 "num_base_bdevs_discovered": 1, 00:17:40.159 "num_base_bdevs_operational": 1, 00:17:40.159 "base_bdevs_list": [ 00:17:40.159 { 00:17:40.159 "name": null, 00:17:40.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.159 "is_configured": false, 00:17:40.159 "data_offset": 0, 00:17:40.159 "data_size": 7936 00:17:40.159 }, 00:17:40.159 { 00:17:40.159 "name": "BaseBdev2", 00:17:40.159 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:40.159 "is_configured": true, 00:17:40.159 "data_offset": 256, 00:17:40.159 "data_size": 7936 00:17:40.159 } 00:17:40.159 ] 00:17:40.159 }' 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.159 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.418 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.418 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.418 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.418 [2024-11-08 16:59:09.918283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.418 [2024-11-08 16:59:09.922874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:17:40.418 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.418 16:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:40.418 [2024-11-08 16:59:09.925038] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:41.797 16:59:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.797 16:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.797 16:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.797 16:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.798 16:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.798 16:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.798 16:59:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.798 16:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.798 16:59:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.798 16:59:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.798 16:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.798 "name": "raid_bdev1", 00:17:41.798 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:41.798 "strip_size_kb": 0, 00:17:41.798 "state": "online", 00:17:41.798 "raid_level": "raid1", 00:17:41.798 "superblock": true, 00:17:41.798 "num_base_bdevs": 2, 00:17:41.798 "num_base_bdevs_discovered": 2, 00:17:41.798 "num_base_bdevs_operational": 2, 00:17:41.798 "process": { 00:17:41.798 "type": "rebuild", 00:17:41.798 "target": "spare", 00:17:41.798 "progress": { 00:17:41.798 "blocks": 2560, 00:17:41.798 "percent": 32 00:17:41.798 } 00:17:41.798 }, 00:17:41.798 "base_bdevs_list": [ 00:17:41.798 { 00:17:41.798 "name": "spare", 00:17:41.798 "uuid": "08a3faf6-cb38-5a15-9aa5-44e66b2760d4", 00:17:41.798 "is_configured": true, 00:17:41.798 "data_offset": 256, 
00:17:41.798 "data_size": 7936 00:17:41.798 }, 00:17:41.798 { 00:17:41.798 "name": "BaseBdev2", 00:17:41.798 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:41.798 "is_configured": true, 00:17:41.798 "data_offset": 256, 00:17:41.798 "data_size": 7936 00:17:41.798 } 00:17:41.798 ] 00:17:41.798 }' 00:17:41.798 16:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.798 [2024-11-08 16:59:11.086098] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.798 [2024-11-08 16:59:11.132082] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:41.798 [2024-11-08 16:59:11.132170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.798 [2024-11-08 16:59:11.132196] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.798 [2024-11-08 16:59:11.132207] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.798 
16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.798 "name": "raid_bdev1", 00:17:41.798 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:41.798 "strip_size_kb": 0, 00:17:41.798 "state": "online", 00:17:41.798 "raid_level": "raid1", 00:17:41.798 "superblock": true, 00:17:41.798 "num_base_bdevs": 2, 00:17:41.798 "num_base_bdevs_discovered": 1, 00:17:41.798 
"num_base_bdevs_operational": 1, 00:17:41.798 "base_bdevs_list": [ 00:17:41.798 { 00:17:41.798 "name": null, 00:17:41.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.798 "is_configured": false, 00:17:41.798 "data_offset": 0, 00:17:41.798 "data_size": 7936 00:17:41.798 }, 00:17:41.798 { 00:17:41.798 "name": "BaseBdev2", 00:17:41.798 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:41.798 "is_configured": true, 00:17:41.798 "data_offset": 256, 00:17:41.798 "data_size": 7936 00:17:41.798 } 00:17:41.798 ] 00:17:41.798 }' 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.798 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.374 
"name": "raid_bdev1", 00:17:42.374 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:42.374 "strip_size_kb": 0, 00:17:42.374 "state": "online", 00:17:42.374 "raid_level": "raid1", 00:17:42.374 "superblock": true, 00:17:42.374 "num_base_bdevs": 2, 00:17:42.374 "num_base_bdevs_discovered": 1, 00:17:42.374 "num_base_bdevs_operational": 1, 00:17:42.374 "base_bdevs_list": [ 00:17:42.374 { 00:17:42.374 "name": null, 00:17:42.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.374 "is_configured": false, 00:17:42.374 "data_offset": 0, 00:17:42.374 "data_size": 7936 00:17:42.374 }, 00:17:42.374 { 00:17:42.374 "name": "BaseBdev2", 00:17:42.374 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:42.374 "is_configured": true, 00:17:42.374 "data_offset": 256, 00:17:42.374 "data_size": 7936 00:17:42.374 } 00:17:42.374 ] 00:17:42.374 }' 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.374 [2024-11-08 16:59:11.728246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.374 [2024-11-08 16:59:11.732888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:42.374 16:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:42.374 [2024-11-08 16:59:11.735144] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.309 "name": "raid_bdev1", 00:17:43.309 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:43.309 "strip_size_kb": 0, 00:17:43.309 "state": "online", 00:17:43.309 "raid_level": "raid1", 00:17:43.309 "superblock": true, 00:17:43.309 "num_base_bdevs": 2, 00:17:43.309 "num_base_bdevs_discovered": 2, 00:17:43.309 "num_base_bdevs_operational": 2, 00:17:43.309 "process": { 00:17:43.309 "type": "rebuild", 00:17:43.309 "target": "spare", 00:17:43.309 "progress": { 00:17:43.309 "blocks": 2560, 00:17:43.309 
"percent": 32 00:17:43.309 } 00:17:43.309 }, 00:17:43.309 "base_bdevs_list": [ 00:17:43.309 { 00:17:43.309 "name": "spare", 00:17:43.309 "uuid": "08a3faf6-cb38-5a15-9aa5-44e66b2760d4", 00:17:43.309 "is_configured": true, 00:17:43.309 "data_offset": 256, 00:17:43.309 "data_size": 7936 00:17:43.309 }, 00:17:43.309 { 00:17:43.309 "name": "BaseBdev2", 00:17:43.309 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:43.309 "is_configured": true, 00:17:43.309 "data_offset": 256, 00:17:43.309 "data_size": 7936 00:17:43.309 } 00:17:43.309 ] 00:17:43.309 }' 00:17:43.309 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:43.569 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=577 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.569 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.570 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.570 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.570 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.570 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.570 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.570 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.570 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.570 "name": "raid_bdev1", 00:17:43.570 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:43.570 "strip_size_kb": 0, 00:17:43.570 "state": "online", 00:17:43.570 "raid_level": "raid1", 00:17:43.570 "superblock": true, 00:17:43.570 "num_base_bdevs": 2, 00:17:43.570 "num_base_bdevs_discovered": 2, 00:17:43.570 "num_base_bdevs_operational": 2, 00:17:43.570 "process": { 00:17:43.570 "type": "rebuild", 00:17:43.570 "target": "spare", 00:17:43.570 "progress": { 00:17:43.570 "blocks": 2816, 00:17:43.570 "percent": 35 00:17:43.570 } 00:17:43.570 }, 00:17:43.570 "base_bdevs_list": [ 00:17:43.570 { 00:17:43.570 "name": "spare", 00:17:43.570 "uuid": "08a3faf6-cb38-5a15-9aa5-44e66b2760d4", 00:17:43.570 "is_configured": true, 00:17:43.570 "data_offset": 256, 00:17:43.570 "data_size": 7936 00:17:43.570 }, 00:17:43.570 { 00:17:43.570 "name": "BaseBdev2", 
00:17:43.570 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:43.570 "is_configured": true, 00:17:43.570 "data_offset": 256, 00:17:43.570 "data_size": 7936 00:17:43.570 } 00:17:43.570 ] 00:17:43.570 }' 00:17:43.570 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.570 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.570 16:59:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.570 16:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.570 16:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.947 "name": "raid_bdev1", 00:17:44.947 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:44.947 "strip_size_kb": 0, 00:17:44.947 "state": "online", 00:17:44.947 "raid_level": "raid1", 00:17:44.947 "superblock": true, 00:17:44.947 "num_base_bdevs": 2, 00:17:44.947 "num_base_bdevs_discovered": 2, 00:17:44.947 "num_base_bdevs_operational": 2, 00:17:44.947 "process": { 00:17:44.947 "type": "rebuild", 00:17:44.947 "target": "spare", 00:17:44.947 "progress": { 00:17:44.947 "blocks": 5632, 00:17:44.947 "percent": 70 00:17:44.947 } 00:17:44.947 }, 00:17:44.947 "base_bdevs_list": [ 00:17:44.947 { 00:17:44.947 "name": "spare", 00:17:44.947 "uuid": "08a3faf6-cb38-5a15-9aa5-44e66b2760d4", 00:17:44.947 "is_configured": true, 00:17:44.947 "data_offset": 256, 00:17:44.947 "data_size": 7936 00:17:44.947 }, 00:17:44.947 { 00:17:44.947 "name": "BaseBdev2", 00:17:44.947 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:44.947 "is_configured": true, 00:17:44.947 "data_offset": 256, 00:17:44.947 "data_size": 7936 00:17:44.947 } 00:17:44.947 ] 00:17:44.947 }' 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.947 16:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.516 [2024-11-08 16:59:14.850231] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:45.516 [2024-11-08 16:59:14.850358] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:45.516 [2024-11-08 16:59:14.850510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.776 "name": "raid_bdev1", 00:17:45.776 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:45.776 "strip_size_kb": 0, 00:17:45.776 "state": "online", 00:17:45.776 "raid_level": "raid1", 00:17:45.776 "superblock": true, 00:17:45.776 "num_base_bdevs": 2, 00:17:45.776 "num_base_bdevs_discovered": 2, 00:17:45.776 "num_base_bdevs_operational": 2, 00:17:45.776 "base_bdevs_list": [ 00:17:45.776 { 00:17:45.776 "name": 
"spare", 00:17:45.776 "uuid": "08a3faf6-cb38-5a15-9aa5-44e66b2760d4", 00:17:45.776 "is_configured": true, 00:17:45.776 "data_offset": 256, 00:17:45.776 "data_size": 7936 00:17:45.776 }, 00:17:45.776 { 00:17:45.776 "name": "BaseBdev2", 00:17:45.776 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:45.776 "is_configured": true, 00:17:45.776 "data_offset": 256, 00:17:45.776 "data_size": 7936 00:17:45.776 } 00:17:45.776 ] 00:17:45.776 }' 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:45.776 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.036 "name": "raid_bdev1", 00:17:46.036 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:46.036 "strip_size_kb": 0, 00:17:46.036 "state": "online", 00:17:46.036 "raid_level": "raid1", 00:17:46.036 "superblock": true, 00:17:46.036 "num_base_bdevs": 2, 00:17:46.036 "num_base_bdevs_discovered": 2, 00:17:46.036 "num_base_bdevs_operational": 2, 00:17:46.036 "base_bdevs_list": [ 00:17:46.036 { 00:17:46.036 "name": "spare", 00:17:46.036 "uuid": "08a3faf6-cb38-5a15-9aa5-44e66b2760d4", 00:17:46.036 "is_configured": true, 00:17:46.036 "data_offset": 256, 00:17:46.036 "data_size": 7936 00:17:46.036 }, 00:17:46.036 { 00:17:46.036 "name": "BaseBdev2", 00:17:46.036 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:46.036 "is_configured": true, 00:17:46.036 "data_offset": 256, 00:17:46.036 "data_size": 7936 00:17:46.036 } 00:17:46.036 ] 00:17:46.036 }' 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.036 "name": "raid_bdev1", 00:17:46.036 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:46.036 "strip_size_kb": 0, 00:17:46.036 "state": "online", 00:17:46.036 "raid_level": "raid1", 00:17:46.036 "superblock": true, 00:17:46.036 "num_base_bdevs": 2, 00:17:46.036 "num_base_bdevs_discovered": 2, 00:17:46.036 "num_base_bdevs_operational": 2, 00:17:46.036 "base_bdevs_list": [ 00:17:46.036 { 00:17:46.036 "name": "spare", 00:17:46.036 "uuid": "08a3faf6-cb38-5a15-9aa5-44e66b2760d4", 00:17:46.036 "is_configured": true, 00:17:46.036 "data_offset": 256, 00:17:46.036 "data_size": 7936 00:17:46.036 }, 00:17:46.036 { 
00:17:46.036 "name": "BaseBdev2", 00:17:46.036 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:46.036 "is_configured": true, 00:17:46.036 "data_offset": 256, 00:17:46.036 "data_size": 7936 00:17:46.036 } 00:17:46.036 ] 00:17:46.036 }' 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.036 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.606 [2024-11-08 16:59:15.913309] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.606 [2024-11-08 16:59:15.913346] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.606 [2024-11-08 16:59:15.913466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.606 [2024-11-08 16:59:15.913546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.606 [2024-11-08 16:59:15.913565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.606 16:59:15 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:46.606 16:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:46.866 /dev/nbd0 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@869 -- # local i 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.866 1+0 records in 00:17:46.866 1+0 records out 00:17:46.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385546 s, 10.6 MB/s 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:46.866 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:46.867 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.867 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:46.867 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd1 00:17:47.127 /dev/nbd1 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.127 1+0 records in 00:17:47.127 1+0 records out 00:17:47.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418718 s, 9.8 MB/s 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:47.127 16:59:16 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.127 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:47.386 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.386 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.386 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.386 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.386 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.386 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.386 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 
00:17:47.386 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.386 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.386 16:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.645 [2024-11-08 16:59:17.080909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.645 [2024-11-08 16:59:17.080997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.645 [2024-11-08 16:59:17.081025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:47.645 [2024-11-08 16:59:17.081048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.645 [2024-11-08 16:59:17.083565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.645 [2024-11-08 16:59:17.083619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.645 [2024-11-08 16:59:17.083741] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:47.645 [2024-11-08 16:59:17.083821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.645 [2024-11-08 16:59:17.083980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:47.645 spare 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.645 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.904 [2024-11-08 16:59:17.183924] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:17:47.904 [2024-11-08 16:59:17.183975] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:47.904 [2024-11-08 16:59:17.184343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0001c19b0 00:17:47.904 [2024-11-08 16:59:17.184549] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:17:47.904 [2024-11-08 16:59:17.184574] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:17:47.904 [2024-11-08 16:59:17.184831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.904 "name": "raid_bdev1", 00:17:47.904 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:47.904 "strip_size_kb": 0, 00:17:47.904 "state": "online", 00:17:47.904 "raid_level": "raid1", 00:17:47.904 "superblock": true, 00:17:47.904 "num_base_bdevs": 2, 00:17:47.904 "num_base_bdevs_discovered": 2, 00:17:47.904 "num_base_bdevs_operational": 2, 00:17:47.904 "base_bdevs_list": [ 00:17:47.904 { 00:17:47.904 "name": "spare", 00:17:47.904 "uuid": "08a3faf6-cb38-5a15-9aa5-44e66b2760d4", 00:17:47.904 "is_configured": true, 00:17:47.904 "data_offset": 256, 00:17:47.904 "data_size": 7936 00:17:47.904 }, 00:17:47.904 { 00:17:47.904 "name": "BaseBdev2", 00:17:47.904 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:47.904 "is_configured": true, 00:17:47.904 "data_offset": 256, 00:17:47.904 "data_size": 7936 00:17:47.904 } 00:17:47.904 ] 00:17:47.904 }' 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.904 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.163 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.163 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.163 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.163 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.163 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.163 
16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.163 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.163 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.163 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.163 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.423 "name": "raid_bdev1", 00:17:48.423 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:48.423 "strip_size_kb": 0, 00:17:48.423 "state": "online", 00:17:48.423 "raid_level": "raid1", 00:17:48.423 "superblock": true, 00:17:48.423 "num_base_bdevs": 2, 00:17:48.423 "num_base_bdevs_discovered": 2, 00:17:48.423 "num_base_bdevs_operational": 2, 00:17:48.423 "base_bdevs_list": [ 00:17:48.423 { 00:17:48.423 "name": "spare", 00:17:48.423 "uuid": "08a3faf6-cb38-5a15-9aa5-44e66b2760d4", 00:17:48.423 "is_configured": true, 00:17:48.423 "data_offset": 256, 00:17:48.423 "data_size": 7936 00:17:48.423 }, 00:17:48.423 { 00:17:48.423 "name": "BaseBdev2", 00:17:48.423 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:48.423 "is_configured": true, 00:17:48.423 "data_offset": 256, 00:17:48.423 "data_size": 7936 00:17:48.423 } 00:17:48.423 ] 00:17:48.423 }' 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.423 16:59:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.423 [2024-11-08 16:59:17.831819] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.423 "name": "raid_bdev1", 00:17:48.423 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:48.423 "strip_size_kb": 0, 00:17:48.423 "state": "online", 00:17:48.423 "raid_level": "raid1", 00:17:48.423 "superblock": true, 00:17:48.423 "num_base_bdevs": 2, 00:17:48.423 "num_base_bdevs_discovered": 1, 00:17:48.423 "num_base_bdevs_operational": 1, 00:17:48.423 "base_bdevs_list": [ 00:17:48.423 { 00:17:48.423 "name": null, 00:17:48.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.423 "is_configured": false, 00:17:48.423 "data_offset": 0, 00:17:48.423 "data_size": 7936 00:17:48.423 }, 00:17:48.423 { 00:17:48.423 "name": "BaseBdev2", 00:17:48.423 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:48.423 "is_configured": true, 00:17:48.423 "data_offset": 256, 00:17:48.423 "data_size": 7936 00:17:48.423 } 00:17:48.423 ] 00:17:48.423 }' 00:17:48.423 16:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.423 16:59:17 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.034 16:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:49.034 16:59:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.034 16:59:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.034 [2024-11-08 16:59:18.283132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.034 [2024-11-08 16:59:18.283367] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.034 [2024-11-08 16:59:18.283390] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:49.034 [2024-11-08 16:59:18.283458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.034 [2024-11-08 16:59:18.287750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:17:49.034 16:59:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.034 16:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:49.034 [2024-11-08 16:59:18.289778] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.975 "name": "raid_bdev1", 00:17:49.975 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:49.975 "strip_size_kb": 0, 00:17:49.975 "state": "online", 00:17:49.975 "raid_level": "raid1", 00:17:49.975 "superblock": true, 00:17:49.975 "num_base_bdevs": 2, 00:17:49.975 "num_base_bdevs_discovered": 2, 00:17:49.975 "num_base_bdevs_operational": 2, 00:17:49.975 "process": { 00:17:49.975 "type": "rebuild", 00:17:49.975 "target": "spare", 00:17:49.975 "progress": { 00:17:49.975 "blocks": 2560, 00:17:49.975 "percent": 32 00:17:49.975 } 00:17:49.975 }, 00:17:49.975 "base_bdevs_list": [ 00:17:49.975 { 00:17:49.975 "name": "spare", 00:17:49.975 "uuid": "08a3faf6-cb38-5a15-9aa5-44e66b2760d4", 00:17:49.975 "is_configured": true, 00:17:49.975 "data_offset": 256, 00:17:49.975 "data_size": 7936 00:17:49.975 }, 00:17:49.975 { 00:17:49.975 "name": "BaseBdev2", 00:17:49.975 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:49.975 "is_configured": true, 00:17:49.975 "data_offset": 256, 00:17:49.975 "data_size": 7936 00:17:49.975 } 00:17:49.975 ] 00:17:49.975 }' 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.975 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.975 [2024-11-08 16:59:19.449611] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.975 [2024-11-08 16:59:19.495085] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:49.975 [2024-11-08 16:59:19.495161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.975 [2024-11-08 16:59:19.495183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.975 [2024-11-08 16:59:19.495193] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:50.235 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.235 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.235 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.235 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.235 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.235 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.235 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:50.235 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.235 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.236 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.236 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.236 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.236 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.236 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.236 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.236 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.236 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.236 "name": "raid_bdev1", 00:17:50.236 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:50.237 "strip_size_kb": 0, 00:17:50.237 "state": "online", 00:17:50.237 "raid_level": "raid1", 00:17:50.237 "superblock": true, 00:17:50.237 "num_base_bdevs": 2, 00:17:50.237 "num_base_bdevs_discovered": 1, 00:17:50.237 "num_base_bdevs_operational": 1, 00:17:50.237 "base_bdevs_list": [ 00:17:50.237 { 00:17:50.237 "name": null, 00:17:50.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.237 "is_configured": false, 00:17:50.237 "data_offset": 0, 00:17:50.237 "data_size": 7936 00:17:50.237 }, 00:17:50.237 { 00:17:50.237 "name": "BaseBdev2", 00:17:50.237 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:50.237 "is_configured": true, 00:17:50.237 "data_offset": 256, 00:17:50.237 "data_size": 7936 00:17:50.237 } 00:17:50.237 ] 00:17:50.237 }' 
00:17:50.238 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.238 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.498 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:50.498 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.498 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.498 [2024-11-08 16:59:19.974913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:50.498 [2024-11-08 16:59:19.974990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.498 [2024-11-08 16:59:19.975020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:50.498 [2024-11-08 16:59:19.975033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.498 [2024-11-08 16:59:19.975587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.498 [2024-11-08 16:59:19.975623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:50.498 [2024-11-08 16:59:19.975752] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:50.498 [2024-11-08 16:59:19.975777] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.498 [2024-11-08 16:59:19.975799] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:50.498 [2024-11-08 16:59:19.975829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.498 [2024-11-08 16:59:19.980191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:50.498 spare 00:17:50.498 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.498 16:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:50.498 [2024-11-08 16:59:19.982409] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:51.876 16:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.876 16:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.876 16:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.876 16:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.876 16:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.876 16:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.876 16:59:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.876 16:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.876 16:59:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.876 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.876 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.876 "name": "raid_bdev1", 00:17:51.876 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:51.876 "strip_size_kb": 0, 00:17:51.876 
"state": "online", 00:17:51.876 "raid_level": "raid1", 00:17:51.876 "superblock": true, 00:17:51.876 "num_base_bdevs": 2, 00:17:51.876 "num_base_bdevs_discovered": 2, 00:17:51.876 "num_base_bdevs_operational": 2, 00:17:51.876 "process": { 00:17:51.876 "type": "rebuild", 00:17:51.876 "target": "spare", 00:17:51.876 "progress": { 00:17:51.876 "blocks": 2560, 00:17:51.876 "percent": 32 00:17:51.876 } 00:17:51.876 }, 00:17:51.876 "base_bdevs_list": [ 00:17:51.876 { 00:17:51.876 "name": "spare", 00:17:51.876 "uuid": "08a3faf6-cb38-5a15-9aa5-44e66b2760d4", 00:17:51.876 "is_configured": true, 00:17:51.876 "data_offset": 256, 00:17:51.876 "data_size": 7936 00:17:51.876 }, 00:17:51.877 { 00:17:51.877 "name": "BaseBdev2", 00:17:51.877 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:51.877 "is_configured": true, 00:17:51.877 "data_offset": 256, 00:17:51.877 "data_size": 7936 00:17:51.877 } 00:17:51.877 ] 00:17:51.877 }' 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.877 [2024-11-08 16:59:21.138754] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.877 [2024-11-08 16:59:21.187696] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:51.877 [2024-11-08 16:59:21.187780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.877 [2024-11-08 16:59:21.187798] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.877 [2024-11-08 16:59:21.187809] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.877 16:59:21 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.877 "name": "raid_bdev1", 00:17:51.877 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:51.877 "strip_size_kb": 0, 00:17:51.877 "state": "online", 00:17:51.877 "raid_level": "raid1", 00:17:51.877 "superblock": true, 00:17:51.877 "num_base_bdevs": 2, 00:17:51.877 "num_base_bdevs_discovered": 1, 00:17:51.877 "num_base_bdevs_operational": 1, 00:17:51.877 "base_bdevs_list": [ 00:17:51.877 { 00:17:51.877 "name": null, 00:17:51.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.877 "is_configured": false, 00:17:51.877 "data_offset": 0, 00:17:51.877 "data_size": 7936 00:17:51.877 }, 00:17:51.877 { 00:17:51.877 "name": "BaseBdev2", 00:17:51.877 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:51.877 "is_configured": true, 00:17:51.877 "data_offset": 256, 00:17:51.877 "data_size": 7936 00:17:51.877 } 00:17:51.877 ] 00:17:51.877 }' 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.877 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.135 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.135 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.135 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.135 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.135 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.135 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.135 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.135 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.136 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.136 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.395 "name": "raid_bdev1", 00:17:52.395 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:52.395 "strip_size_kb": 0, 00:17:52.395 "state": "online", 00:17:52.395 "raid_level": "raid1", 00:17:52.395 "superblock": true, 00:17:52.395 "num_base_bdevs": 2, 00:17:52.395 "num_base_bdevs_discovered": 1, 00:17:52.395 "num_base_bdevs_operational": 1, 00:17:52.395 "base_bdevs_list": [ 00:17:52.395 { 00:17:52.395 "name": null, 00:17:52.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.395 "is_configured": false, 00:17:52.395 "data_offset": 0, 00:17:52.395 "data_size": 7936 00:17:52.395 }, 00:17:52.395 { 00:17:52.395 "name": "BaseBdev2", 00:17:52.395 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:52.395 "is_configured": true, 00:17:52.395 "data_offset": 256, 00:17:52.395 "data_size": 7936 00:17:52.395 } 00:17:52.395 ] 00:17:52.395 }' 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.395 [2024-11-08 16:59:21.763455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:52.395 [2024-11-08 16:59:21.763535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.395 [2024-11-08 16:59:21.763559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:52.395 [2024-11-08 16:59:21.763575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.395 [2024-11-08 16:59:21.764105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.395 [2024-11-08 16:59:21.764153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:52.395 [2024-11-08 16:59:21.764247] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:52.395 [2024-11-08 16:59:21.764280] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:52.395 [2024-11-08 16:59:21.764295] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:52.395 [2024-11-08 16:59:21.764315] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:52.395 BaseBdev1 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.395 16:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:53.342 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:53.342 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.342 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.342 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.342 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.342 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.343 "name": "raid_bdev1", 00:17:53.343 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:53.343 "strip_size_kb": 0, 00:17:53.343 "state": "online", 00:17:53.343 "raid_level": "raid1", 00:17:53.343 "superblock": true, 00:17:53.343 "num_base_bdevs": 2, 00:17:53.343 "num_base_bdevs_discovered": 1, 00:17:53.343 "num_base_bdevs_operational": 1, 00:17:53.343 "base_bdevs_list": [ 00:17:53.343 { 00:17:53.343 "name": null, 00:17:53.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.343 "is_configured": false, 00:17:53.343 "data_offset": 0, 00:17:53.343 "data_size": 7936 00:17:53.343 }, 00:17:53.343 { 00:17:53.343 "name": "BaseBdev2", 00:17:53.343 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:53.343 "is_configured": true, 00:17:53.343 "data_offset": 256, 00:17:53.343 "data_size": 7936 00:17:53.343 } 00:17:53.343 ] 00:17:53.343 }' 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.343 16:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.910 "name": "raid_bdev1", 00:17:53.910 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:53.910 "strip_size_kb": 0, 00:17:53.910 "state": "online", 00:17:53.910 "raid_level": "raid1", 00:17:53.910 "superblock": true, 00:17:53.910 "num_base_bdevs": 2, 00:17:53.910 "num_base_bdevs_discovered": 1, 00:17:53.910 "num_base_bdevs_operational": 1, 00:17:53.910 "base_bdevs_list": [ 00:17:53.910 { 00:17:53.910 "name": null, 00:17:53.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.910 "is_configured": false, 00:17:53.910 "data_offset": 0, 00:17:53.910 "data_size": 7936 00:17:53.910 }, 00:17:53.910 { 00:17:53.910 "name": "BaseBdev2", 00:17:53.910 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:53.910 "is_configured": true, 00:17:53.910 "data_offset": 256, 00:17:53.910 "data_size": 7936 00:17:53.910 } 00:17:53.910 ] 00:17:53.910 }' 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.910 [2024-11-08 16:59:23.388824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.910 [2024-11-08 16:59:23.389017] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:53.910 [2024-11-08 16:59:23.389031] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:53.910 request: 00:17:53.910 { 00:17:53.910 "base_bdev": "BaseBdev1", 00:17:53.910 "raid_bdev": "raid_bdev1", 00:17:53.910 "method": "bdev_raid_add_base_bdev", 00:17:53.910 "req_id": 1 00:17:53.910 } 00:17:53.910 Got JSON-RPC error response 00:17:53.910 response: 00:17:53.910 { 00:17:53.910 "code": -22, 00:17:53.910 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:53.910 } 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:53.910 16:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.287 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.287 "name": "raid_bdev1", 00:17:55.287 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:55.287 "strip_size_kb": 0, 00:17:55.288 "state": "online", 00:17:55.288 "raid_level": "raid1", 00:17:55.288 "superblock": true, 00:17:55.288 "num_base_bdevs": 2, 00:17:55.288 "num_base_bdevs_discovered": 1, 00:17:55.288 "num_base_bdevs_operational": 1, 00:17:55.288 "base_bdevs_list": [ 00:17:55.288 { 00:17:55.288 "name": null, 00:17:55.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.288 "is_configured": false, 00:17:55.288 "data_offset": 0, 00:17:55.288 "data_size": 7936 00:17:55.288 }, 00:17:55.288 { 00:17:55.288 "name": "BaseBdev2", 00:17:55.288 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:55.288 "is_configured": true, 00:17:55.288 "data_offset": 256, 00:17:55.288 "data_size": 7936 00:17:55.288 } 00:17:55.288 ] 00:17:55.288 }' 00:17:55.288 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.288 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.547 16:59:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.547 "name": "raid_bdev1", 00:17:55.547 "uuid": "4bc98dd3-ac9b-4b8a-aaa8-dd3b959bf531", 00:17:55.547 "strip_size_kb": 0, 00:17:55.547 "state": "online", 00:17:55.547 "raid_level": "raid1", 00:17:55.547 "superblock": true, 00:17:55.547 "num_base_bdevs": 2, 00:17:55.547 "num_base_bdevs_discovered": 1, 00:17:55.547 "num_base_bdevs_operational": 1, 00:17:55.547 "base_bdevs_list": [ 00:17:55.547 { 00:17:55.547 "name": null, 00:17:55.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.547 "is_configured": false, 00:17:55.547 "data_offset": 0, 00:17:55.547 "data_size": 7936 00:17:55.547 }, 00:17:55.547 { 00:17:55.547 "name": "BaseBdev2", 00:17:55.547 "uuid": "fdbf4ed3-5d78-5fe6-83c6-8b4b5da1c747", 00:17:55.547 "is_configured": true, 00:17:55.547 "data_offset": 256, 00:17:55.547 "data_size": 7936 00:17:55.547 } 00:17:55.547 ] 00:17:55.547 }' 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.547 16:59:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96988 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96988 ']' 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96988 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.547 16:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96988 00:17:55.547 killing process with pid 96988 00:17:55.547 Received shutdown signal, test time was about 60.000000 seconds 00:17:55.547 00:17:55.547 Latency(us) 00:17:55.547 [2024-11-08T16:59:25.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.547 [2024-11-08T16:59:25.075Z] =================================================================================================================== 00:17:55.547 [2024-11-08T16:59:25.075Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:55.547 16:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:55.547 16:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:55.547 16:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96988' 00:17:55.547 16:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96988 00:17:55.547 [2024-11-08 16:59:25.031471] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.547 16:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96988 00:17:55.547 [2024-11-08 16:59:25.031681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.547 [2024-11-08 
16:59:25.031745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.547 [2024-11-08 16:59:25.031756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:17:55.547 [2024-11-08 16:59:25.064321] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.806 16:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:55.806 00:17:55.806 real 0m18.503s 00:17:55.806 user 0m24.606s 00:17:55.806 sys 0m2.553s 00:17:55.806 16:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:55.806 16:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.806 ************************************ 00:17:55.806 END TEST raid_rebuild_test_sb_4k 00:17:55.806 ************************************ 00:17:56.065 16:59:25 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:56.065 16:59:25 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:56.065 16:59:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:56.065 16:59:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:56.065 16:59:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.065 ************************************ 00:17:56.065 START TEST raid_state_function_test_sb_md_separate 00:17:56.065 ************************************ 00:17:56.065 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:56.065 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:56.066 
16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:56.066 16:59:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97666 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97666' 00:17:56.066 Process raid pid: 97666 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97666 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97666 ']' 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.066 16:59:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.066 [2024-11-08 16:59:25.457836] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:56.066 [2024-11-08 16:59:25.458063] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.324 [2024-11-08 16:59:25.623859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.324 [2024-11-08 16:59:25.674714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.324 [2024-11-08 16:59:25.719556] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.324 [2024-11-08 16:59:25.719599] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.932 [2024-11-08 16:59:26.322368] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:56.932 [2024-11-08 16:59:26.322443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:17:56.932 [2024-11-08 16:59:26.322460] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.932 [2024-11-08 16:59:26.322475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.932 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.932 "name": "Existed_Raid", 00:17:56.932 "uuid": "82f6968e-8d45-40e5-b4db-5e9c046da14d", 00:17:56.932 "strip_size_kb": 0, 00:17:56.932 "state": "configuring", 00:17:56.933 "raid_level": "raid1", 00:17:56.933 "superblock": true, 00:17:56.933 "num_base_bdevs": 2, 00:17:56.933 "num_base_bdevs_discovered": 0, 00:17:56.933 "num_base_bdevs_operational": 2, 00:17:56.933 "base_bdevs_list": [ 00:17:56.933 { 00:17:56.933 "name": "BaseBdev1", 00:17:56.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.933 "is_configured": false, 00:17:56.933 "data_offset": 0, 00:17:56.933 "data_size": 0 00:17:56.933 }, 00:17:56.933 { 00:17:56.933 "name": "BaseBdev2", 00:17:56.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.933 "is_configured": false, 00:17:56.933 "data_offset": 0, 00:17:56.933 "data_size": 0 00:17:56.933 } 00:17:56.933 ] 00:17:56.933 }' 00:17:56.933 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.933 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.500 
[2024-11-08 16:59:26.781479] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.500 [2024-11-08 16:59:26.781614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.500 [2024-11-08 16:59:26.793493] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:57.500 [2024-11-08 16:59:26.793617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:57.500 [2024-11-08 16:59:26.793696] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.500 [2024-11-08 16:59:26.793735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.500 [2024-11-08 16:59:26.815486] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.500 
BaseBdev1 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:57.500 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.501 [ 00:17:57.501 { 00:17:57.501 "name": "BaseBdev1", 00:17:57.501 "aliases": [ 00:17:57.501 "d3c8ee75-95b8-4c34-878d-d49a705c9359" 00:17:57.501 ], 00:17:57.501 "product_name": "Malloc disk", 
00:17:57.501 "block_size": 4096, 00:17:57.501 "num_blocks": 8192, 00:17:57.501 "uuid": "d3c8ee75-95b8-4c34-878d-d49a705c9359", 00:17:57.501 "md_size": 32, 00:17:57.501 "md_interleave": false, 00:17:57.501 "dif_type": 0, 00:17:57.501 "assigned_rate_limits": { 00:17:57.501 "rw_ios_per_sec": 0, 00:17:57.501 "rw_mbytes_per_sec": 0, 00:17:57.501 "r_mbytes_per_sec": 0, 00:17:57.501 "w_mbytes_per_sec": 0 00:17:57.501 }, 00:17:57.501 "claimed": true, 00:17:57.501 "claim_type": "exclusive_write", 00:17:57.501 "zoned": false, 00:17:57.501 "supported_io_types": { 00:17:57.501 "read": true, 00:17:57.501 "write": true, 00:17:57.501 "unmap": true, 00:17:57.501 "flush": true, 00:17:57.501 "reset": true, 00:17:57.501 "nvme_admin": false, 00:17:57.501 "nvme_io": false, 00:17:57.501 "nvme_io_md": false, 00:17:57.501 "write_zeroes": true, 00:17:57.501 "zcopy": true, 00:17:57.501 "get_zone_info": false, 00:17:57.501 "zone_management": false, 00:17:57.501 "zone_append": false, 00:17:57.501 "compare": false, 00:17:57.501 "compare_and_write": false, 00:17:57.501 "abort": true, 00:17:57.501 "seek_hole": false, 00:17:57.501 "seek_data": false, 00:17:57.501 "copy": true, 00:17:57.501 "nvme_iov_md": false 00:17:57.501 }, 00:17:57.501 "memory_domains": [ 00:17:57.501 { 00:17:57.501 "dma_device_id": "system", 00:17:57.501 "dma_device_type": 1 00:17:57.501 }, 00:17:57.501 { 00:17:57.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.501 "dma_device_type": 2 00:17:57.501 } 00:17:57.501 ], 00:17:57.501 "driver_specific": {} 00:17:57.501 } 00:17:57.501 ] 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:57.501 16:59:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.501 "name": "Existed_Raid", 00:17:57.501 "uuid": "3171d4ef-2874-4b01-82d5-ddd6a50d55a3", 
00:17:57.501 "strip_size_kb": 0, 00:17:57.501 "state": "configuring", 00:17:57.501 "raid_level": "raid1", 00:17:57.501 "superblock": true, 00:17:57.501 "num_base_bdevs": 2, 00:17:57.501 "num_base_bdevs_discovered": 1, 00:17:57.501 "num_base_bdevs_operational": 2, 00:17:57.501 "base_bdevs_list": [ 00:17:57.501 { 00:17:57.501 "name": "BaseBdev1", 00:17:57.501 "uuid": "d3c8ee75-95b8-4c34-878d-d49a705c9359", 00:17:57.501 "is_configured": true, 00:17:57.501 "data_offset": 256, 00:17:57.501 "data_size": 7936 00:17:57.501 }, 00:17:57.501 { 00:17:57.501 "name": "BaseBdev2", 00:17:57.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.501 "is_configured": false, 00:17:57.501 "data_offset": 0, 00:17:57.501 "data_size": 0 00:17:57.501 } 00:17:57.501 ] 00:17:57.501 }' 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.501 16:59:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.760 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:57.760 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.760 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.760 [2024-11-08 16:59:27.270806] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.760 [2024-11-08 16:59:27.270953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:17:57.760 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.760 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:57.760 16:59:27 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.760 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.760 [2024-11-08 16:59:27.282896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.760 [2024-11-08 16:59:27.285030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.760 [2024-11-08 16:59:27.285122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.019 "name": "Existed_Raid", 00:17:58.019 "uuid": "00e375ec-f6fc-4fe2-98d5-db6751f6ffc2", 00:17:58.019 "strip_size_kb": 0, 00:17:58.019 "state": "configuring", 00:17:58.019 "raid_level": "raid1", 00:17:58.019 "superblock": true, 00:17:58.019 "num_base_bdevs": 2, 00:17:58.019 "num_base_bdevs_discovered": 1, 00:17:58.019 "num_base_bdevs_operational": 2, 00:17:58.019 "base_bdevs_list": [ 00:17:58.019 { 00:17:58.019 "name": "BaseBdev1", 00:17:58.019 "uuid": "d3c8ee75-95b8-4c34-878d-d49a705c9359", 00:17:58.019 "is_configured": true, 00:17:58.019 "data_offset": 256, 00:17:58.019 "data_size": 7936 00:17:58.019 }, 00:17:58.019 { 00:17:58.019 "name": "BaseBdev2", 00:17:58.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.019 "is_configured": false, 00:17:58.019 "data_offset": 0, 00:17:58.019 "data_size": 0 00:17:58.019 } 00:17:58.019 ] 00:17:58.019 }' 00:17:58.019 16:59:27 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.019 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.279 [2024-11-08 16:59:27.753026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.279 [2024-11-08 16:59:27.753423] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:17:58.279 [2024-11-08 16:59:27.753502] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:58.279 [2024-11-08 16:59:27.753703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:58.279 [2024-11-08 16:59:27.753882] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:17:58.279 [2024-11-08 16:59:27.753960] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:17:58.279 BaseBdev2 00:17:58.279 [2024-11-08 16:59:27.754134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.279 [ 00:17:58.279 { 00:17:58.279 "name": "BaseBdev2", 00:17:58.279 "aliases": [ 00:17:58.279 "3b39b1fa-b1c6-47ea-97e1-810706fb2c2b" 00:17:58.279 ], 00:17:58.279 "product_name": "Malloc disk", 00:17:58.279 "block_size": 4096, 00:17:58.279 "num_blocks": 8192, 00:17:58.279 "uuid": "3b39b1fa-b1c6-47ea-97e1-810706fb2c2b", 00:17:58.279 "md_size": 32, 00:17:58.279 "md_interleave": false, 00:17:58.279 "dif_type": 0, 00:17:58.279 "assigned_rate_limits": { 00:17:58.279 "rw_ios_per_sec": 0, 00:17:58.279 "rw_mbytes_per_sec": 0, 00:17:58.279 "r_mbytes_per_sec": 0, 00:17:58.279 "w_mbytes_per_sec": 0 00:17:58.279 }, 00:17:58.279 "claimed": true, 00:17:58.279 "claim_type": 
"exclusive_write", 00:17:58.279 "zoned": false, 00:17:58.279 "supported_io_types": { 00:17:58.279 "read": true, 00:17:58.279 "write": true, 00:17:58.279 "unmap": true, 00:17:58.279 "flush": true, 00:17:58.279 "reset": true, 00:17:58.279 "nvme_admin": false, 00:17:58.279 "nvme_io": false, 00:17:58.279 "nvme_io_md": false, 00:17:58.279 "write_zeroes": true, 00:17:58.279 "zcopy": true, 00:17:58.279 "get_zone_info": false, 00:17:58.279 "zone_management": false, 00:17:58.279 "zone_append": false, 00:17:58.279 "compare": false, 00:17:58.279 "compare_and_write": false, 00:17:58.279 "abort": true, 00:17:58.279 "seek_hole": false, 00:17:58.279 "seek_data": false, 00:17:58.279 "copy": true, 00:17:58.279 "nvme_iov_md": false 00:17:58.279 }, 00:17:58.279 "memory_domains": [ 00:17:58.279 { 00:17:58.279 "dma_device_id": "system", 00:17:58.279 "dma_device_type": 1 00:17:58.279 }, 00:17:58.279 { 00:17:58.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.279 "dma_device_type": 2 00:17:58.279 } 00:17:58.279 ], 00:17:58.279 "driver_specific": {} 00:17:58.279 } 00:17:58.279 ] 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.279 
16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.279 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.538 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.538 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.538 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.538 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.538 "name": "Existed_Raid", 00:17:58.538 "uuid": "00e375ec-f6fc-4fe2-98d5-db6751f6ffc2", 00:17:58.538 "strip_size_kb": 0, 00:17:58.538 "state": "online", 00:17:58.538 "raid_level": "raid1", 00:17:58.538 "superblock": true, 00:17:58.538 "num_base_bdevs": 2, 00:17:58.538 "num_base_bdevs_discovered": 2, 00:17:58.538 "num_base_bdevs_operational": 2, 00:17:58.538 
"base_bdevs_list": [ 00:17:58.538 { 00:17:58.538 "name": "BaseBdev1", 00:17:58.538 "uuid": "d3c8ee75-95b8-4c34-878d-d49a705c9359", 00:17:58.538 "is_configured": true, 00:17:58.538 "data_offset": 256, 00:17:58.538 "data_size": 7936 00:17:58.538 }, 00:17:58.538 { 00:17:58.538 "name": "BaseBdev2", 00:17:58.538 "uuid": "3b39b1fa-b1c6-47ea-97e1-810706fb2c2b", 00:17:58.538 "is_configured": true, 00:17:58.538 "data_offset": 256, 00:17:58.538 "data_size": 7936 00:17:58.538 } 00:17:58.538 ] 00:17:58.538 }' 00:17:58.538 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.538 16:59:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:58.798 [2024-11-08 16:59:28.252659] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.798 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.798 "name": "Existed_Raid", 00:17:58.798 "aliases": [ 00:17:58.798 "00e375ec-f6fc-4fe2-98d5-db6751f6ffc2" 00:17:58.798 ], 00:17:58.798 "product_name": "Raid Volume", 00:17:58.798 "block_size": 4096, 00:17:58.798 "num_blocks": 7936, 00:17:58.798 "uuid": "00e375ec-f6fc-4fe2-98d5-db6751f6ffc2", 00:17:58.798 "md_size": 32, 00:17:58.798 "md_interleave": false, 00:17:58.798 "dif_type": 0, 00:17:58.798 "assigned_rate_limits": { 00:17:58.798 "rw_ios_per_sec": 0, 00:17:58.798 "rw_mbytes_per_sec": 0, 00:17:58.798 "r_mbytes_per_sec": 0, 00:17:58.798 "w_mbytes_per_sec": 0 00:17:58.798 }, 00:17:58.798 "claimed": false, 00:17:58.798 "zoned": false, 00:17:58.798 "supported_io_types": { 00:17:58.798 "read": true, 00:17:58.798 "write": true, 00:17:58.798 "unmap": false, 00:17:58.798 "flush": false, 00:17:58.798 "reset": true, 00:17:58.798 "nvme_admin": false, 00:17:58.798 "nvme_io": false, 00:17:58.798 "nvme_io_md": false, 00:17:58.798 "write_zeroes": true, 00:17:58.798 "zcopy": false, 00:17:58.798 "get_zone_info": false, 00:17:58.798 "zone_management": false, 00:17:58.798 "zone_append": false, 00:17:58.798 "compare": false, 00:17:58.798 "compare_and_write": false, 00:17:58.798 "abort": false, 00:17:58.798 "seek_hole": false, 00:17:58.798 "seek_data": false, 00:17:58.798 "copy": false, 00:17:58.798 "nvme_iov_md": false 00:17:58.798 }, 00:17:58.798 "memory_domains": [ 00:17:58.798 { 00:17:58.798 "dma_device_id": "system", 00:17:58.798 "dma_device_type": 1 00:17:58.798 }, 00:17:58.798 { 00:17:58.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.798 "dma_device_type": 2 00:17:58.798 }, 00:17:58.798 { 
00:17:58.798 "dma_device_id": "system", 00:17:58.798 "dma_device_type": 1 00:17:58.798 }, 00:17:58.798 { 00:17:58.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.798 "dma_device_type": 2 00:17:58.798 } 00:17:58.798 ], 00:17:58.798 "driver_specific": { 00:17:58.798 "raid": { 00:17:58.798 "uuid": "00e375ec-f6fc-4fe2-98d5-db6751f6ffc2", 00:17:58.799 "strip_size_kb": 0, 00:17:58.799 "state": "online", 00:17:58.799 "raid_level": "raid1", 00:17:58.799 "superblock": true, 00:17:58.799 "num_base_bdevs": 2, 00:17:58.799 "num_base_bdevs_discovered": 2, 00:17:58.799 "num_base_bdevs_operational": 2, 00:17:58.799 "base_bdevs_list": [ 00:17:58.799 { 00:17:58.799 "name": "BaseBdev1", 00:17:58.799 "uuid": "d3c8ee75-95b8-4c34-878d-d49a705c9359", 00:17:58.799 "is_configured": true, 00:17:58.799 "data_offset": 256, 00:17:58.799 "data_size": 7936 00:17:58.799 }, 00:17:58.799 { 00:17:58.799 "name": "BaseBdev2", 00:17:58.799 "uuid": "3b39b1fa-b1c6-47ea-97e1-810706fb2c2b", 00:17:58.799 "is_configured": true, 00:17:58.799 "data_offset": 256, 00:17:58.799 "data_size": 7936 00:17:58.799 } 00:17:58.799 ] 00:17:58.799 } 00:17:58.799 } 00:17:58.799 }' 00:17:58.799 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:58.799 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:58.799 BaseBdev2' 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate 
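The property verification traced above builds a comparison tuple for the raid volume (`cmp_raid_bdev='4096 32 false 0'`, i.e. block_size, md_size, md_interleave, dif_type) and iterates over `$base_bdev_names` unquoted, relying on word splitting of the newline-separated names. A standalone sketch of that loop, with the per-bdev tuple hardcoded to the sample value from the log rather than fetched via `rpc_cmd bdev_get_bdevs`:

```shell
# Tuple of raid-volume properties, as produced by
# jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
cmp_raid_bdev='4096 32 false 0'

# Newline-separated names, as assigned by bdev_raid.sh@188 in the log.
base_bdev_names='BaseBdev1
BaseBdev2'

# Unquoted expansion: word splitting turns the string into loop items.
for name in $base_bdev_names; do
  cmp_base_bdev='4096 32 false 0'   # sample data; really comes from rpc_cmd
  if [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]; then
    echo "$name matches raid volume"
  fi
done
```

In the xtrace above, the right-hand side of the `[[ ... == ... ]]` test appears backslash-escaped (`\4\0\9\6 ...`) because an unquoted RHS would be treated as a glob pattern; escaping every character forces a literal match.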
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.059 [2024-11-08 16:59:28.464051] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.059 "name": "Existed_Raid", 00:17:59.059 "uuid": "00e375ec-f6fc-4fe2-98d5-db6751f6ffc2", 00:17:59.059 "strip_size_kb": 0, 00:17:59.059 "state": "online", 00:17:59.059 "raid_level": "raid1", 00:17:59.059 "superblock": true, 00:17:59.059 "num_base_bdevs": 2, 00:17:59.059 "num_base_bdevs_discovered": 1, 00:17:59.059 "num_base_bdevs_operational": 1, 00:17:59.059 "base_bdevs_list": [ 00:17:59.059 { 00:17:59.059 "name": null, 00:17:59.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.059 "is_configured": false, 00:17:59.059 "data_offset": 0, 00:17:59.059 "data_size": 7936 00:17:59.059 }, 00:17:59.059 { 00:17:59.059 "name": "BaseBdev2", 00:17:59.059 "uuid": 
"3b39b1fa-b1c6-47ea-97e1-810706fb2c2b", 00:17:59.059 "is_configured": true, 00:17:59.059 "data_offset": 256, 00:17:59.059 "data_size": 7936 00:17:59.059 } 00:17:59.059 ] 00:17:59.059 }' 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.059 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.628 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:59.628 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.628 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.628 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:59.628 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.628 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.628 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.628 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:59.628 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:59.629 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:59.629 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.629 16:59:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.629 [2024-11-08 16:59:28.999897] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:59.629 [2024-11-08 16:59:29.000126] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.629 [2024-11-08 16:59:29.013153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.629 [2024-11-08 16:59:29.013207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.629 [2024-11-08 16:59:29.013222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:59.629 16:59:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97666 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97666 ']' 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97666 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97666 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:59.629 killing process with pid 97666 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97666' 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97666 00:17:59.629 [2024-11-08 16:59:29.100629] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:59.629 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97666 00:17:59.629 [2024-11-08 16:59:29.102467] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.196 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:00.196 00:18:00.196 real 0m4.136s 00:18:00.196 user 0m6.396s 00:18:00.196 sys 0m0.850s 00:18:00.196 16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.196 
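The teardown above runs `killprocess 97666`, which (per the traced `autotest_common.sh` lines) checks `kill -0 $pid`, verifies via `uname`/`ps` that the pid still names the expected reactor process, then kills and waits. A simplified sketch of that pattern — this omits the `ps --no-headers -o comm=` name check and the sudo guard that the real helper performs:

```shell
# Simplified killprocess-style helper: probe, signal, reap.
killprocess() {
  local pid=$1
  # kill -0 sends no signal; it only tests whether the pid exists.
  if ! kill -0 "$pid" 2>/dev/null; then
    echo "pid $pid already gone"
    return 0
  fi
  kill "$pid"
  # Reap the child so it does not linger as a zombie; ignore errors
  # when the pid is not a child of this shell.
  wait "$pid" 2>/dev/null || true
  echo "killed pid $pid"
}

# Demonstration against a throwaway background process.
sleep 30 &
killprocess $!
```

The real helper's extra `ps` check guards against pid reuse between the test run and teardown.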
16:59:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.196 ************************************ 00:18:00.196 END TEST raid_state_function_test_sb_md_separate 00:18:00.196 ************************************ 00:18:00.196 16:59:29 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:00.196 16:59:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:00.196 16:59:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.196 16:59:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.196 ************************************ 00:18:00.196 START TEST raid_superblock_test_md_separate 00:18:00.196 ************************************ 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97903 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97903 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97903 ']' 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
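`waitforlisten 97903` above blocks until the freshly launched `bdev_svc` app is listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A hedged sketch of that polling loop — the function name, socket path, and retry cadence here are illustrative; the real helper lives in `autotest_common.sh` and also rechecks that the pid is still alive between polls:

```shell
# Illustrative waitforlisten-style poll: succeed once the UNIX domain
# socket exists, give up after a bounded number of retries.
waitforlisten_sketch() {
  local sock=$1 retries=${2:-100}
  while (( retries > 0 )); do
    if [[ -S $sock ]]; then
      echo "listening on $sock"
      return 0
    fi
    retries=$(( retries - 1 ))
    sleep 0.1
  done
  echo "timed out waiting for $sock"
  return 1
}

# With no SPDK app running, this takes the timeout path.
waitforlisten_sketch /tmp/hypothetical-spdk.sock 2 || true
```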
00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.196 16:59:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.196 [2024-11-08 16:59:29.661659] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:00.196 [2024-11-08 16:59:29.661944] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97903 ] 00:18:00.458 [2024-11-08 16:59:29.833951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.458 [2024-11-08 16:59:29.917385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.723 [2024-11-08 16:59:30.000265] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.723 [2024-11-08 16:59:30.000426] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:01.291 16:59:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.291 malloc1 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.291 [2024-11-08 16:59:30.645555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:01.291 [2024-11-08 16:59:30.645913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.291 [2024-11-08 16:59:30.646088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:01.291 [2024-11-08 16:59:30.646233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.291 [2024-11-08 16:59:30.648753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.291 [2024-11-08 16:59:30.648845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:01.291 pt1 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.291 malloc2 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.291 16:59:30 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.291 [2024-11-08 16:59:30.691089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.291 [2024-11-08 16:59:30.691902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.291 [2024-11-08 16:59:30.692078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:01.291 [2024-11-08 16:59:30.692141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.291 [2024-11-08 16:59:30.699878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.291 [2024-11-08 16:59:30.700018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.291 pt2 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.291 [2024-11-08 16:59:30.708480] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:01.291 [2024-11-08 16:59:30.713071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.291 [2024-11-08 16:59:30.713511] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:18:01.291 [2024-11-08 16:59:30.713559] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:01.291 [2024-11-08 16:59:30.713845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:01.291 [2024-11-08 16:59:30.714136] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:18:01.291 [2024-11-08 16:59:30.714158] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:18:01.291 [2024-11-08 16:59:30.714340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.291 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.292 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.292 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.292 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.292 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.292 16:59:30 
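The setup the superblock test drives above — two metadata-capable malloc bdevs, each wrapped in a passthru bdev, then assembled into a raid1 volume with `-s` (superblock) — can be sketched as a dry run. The `rpc_cmd` stand-in here just echoes each command with its literal arguments copied from the log; the real wrapper forwards them to the SPDK application's RPC socket:

```shell
# Dry-run stand-in: the real rpc_cmd sends these to /var/tmp/spdk.sock.
rpc_cmd() { echo "rpc.py $*"; }

# 32 MB malloc bdevs with 4096-byte blocks and 32-byte separate metadata.
rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2
rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# Assemble the raid1 volume; -s enables the on-disk superblock.
rpc_cmd bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
```

The passthru layer is what lets the test later detach and reattach base bdevs without destroying the underlying malloc devices.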
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:01.292 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:01.292 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:01.292 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:01.292 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:01.292 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:01.292 "name": "raid_bdev1",
00:18:01.292 "uuid": "54c26d28-f646-4927-b12d-02d1f440f8ef",
00:18:01.292 "strip_size_kb": 0,
00:18:01.292 "state": "online",
00:18:01.292 "raid_level": "raid1",
00:18:01.292 "superblock": true,
00:18:01.292 "num_base_bdevs": 2,
00:18:01.292 "num_base_bdevs_discovered": 2,
00:18:01.292 "num_base_bdevs_operational": 2,
00:18:01.292 "base_bdevs_list": [
00:18:01.292 {
00:18:01.292 "name": "pt1",
00:18:01.292 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:01.292 "is_configured": true,
00:18:01.292 "data_offset": 256,
00:18:01.292 "data_size": 7936
00:18:01.292 },
00:18:01.292 {
00:18:01.292 "name": "pt2",
00:18:01.292 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:01.292 "is_configured": true,
00:18:01.292 "data_offset": 256,
00:18:01.292 "data_size": 7936
00:18:01.292 }
00:18:01.292 ]
00:18:01.292 }'
00:18:01.292 16:59:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:01.292 16:59:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:01.857 [2024-11-08 16:59:31.177153] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:01.857 "name": "raid_bdev1",
00:18:01.857 "aliases": [
00:18:01.857 "54c26d28-f646-4927-b12d-02d1f440f8ef"
00:18:01.857 ],
00:18:01.857 "product_name": "Raid Volume",
00:18:01.857 "block_size": 4096,
00:18:01.857 "num_blocks": 7936,
00:18:01.857 "uuid": "54c26d28-f646-4927-b12d-02d1f440f8ef",
00:18:01.857 "md_size": 32,
00:18:01.857 "md_interleave": false,
00:18:01.857 "dif_type": 0,
00:18:01.857 "assigned_rate_limits": {
00:18:01.857 "rw_ios_per_sec": 0,
00:18:01.857 "rw_mbytes_per_sec": 0,
00:18:01.857 "r_mbytes_per_sec": 0,
00:18:01.857 "w_mbytes_per_sec": 0
00:18:01.857 },
00:18:01.857 "claimed": false,
00:18:01.857 "zoned": false,
00:18:01.857 "supported_io_types": {
00:18:01.857 "read": true,
00:18:01.857 "write": true,
00:18:01.857 "unmap": false,
00:18:01.857 "flush": false,
00:18:01.857 "reset": true,
00:18:01.857 "nvme_admin": false,
00:18:01.857 "nvme_io": false,
00:18:01.857 "nvme_io_md": false,
00:18:01.857 "write_zeroes": true,
00:18:01.857 "zcopy": false,
00:18:01.857 "get_zone_info": false,
00:18:01.857 "zone_management": false,
00:18:01.857 "zone_append": false,
00:18:01.857 "compare": false,
00:18:01.857 "compare_and_write": false,
00:18:01.857 "abort": false,
00:18:01.857 "seek_hole": false,
00:18:01.857 "seek_data": false,
00:18:01.857 "copy": false,
00:18:01.857 "nvme_iov_md": false
00:18:01.857 },
00:18:01.857 "memory_domains": [
00:18:01.857 {
00:18:01.857 "dma_device_id": "system",
00:18:01.857 "dma_device_type": 1
00:18:01.857 },
00:18:01.857 {
00:18:01.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:01.857 "dma_device_type": 2
00:18:01.857 },
00:18:01.857 {
00:18:01.857 "dma_device_id": "system",
00:18:01.857 "dma_device_type": 1
00:18:01.857 },
00:18:01.857 {
00:18:01.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:01.857 "dma_device_type": 2
00:18:01.857 }
00:18:01.857 ],
00:18:01.857 "driver_specific": {
00:18:01.857 "raid": {
00:18:01.857 "uuid": "54c26d28-f646-4927-b12d-02d1f440f8ef",
00:18:01.857 "strip_size_kb": 0,
00:18:01.857 "state": "online",
00:18:01.857 "raid_level": "raid1",
00:18:01.857 "superblock": true,
00:18:01.857 "num_base_bdevs": 2,
00:18:01.857 "num_base_bdevs_discovered": 2,
00:18:01.857 "num_base_bdevs_operational": 2,
00:18:01.857 "base_bdevs_list": [
00:18:01.857 {
00:18:01.857 "name": "pt1",
00:18:01.857 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:01.857 "is_configured": true,
00:18:01.857 "data_offset": 256,
00:18:01.857 "data_size": 7936
00:18:01.857 },
00:18:01.857 {
00:18:01.857 "name": "pt2",
00:18:01.857 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:01.857 "is_configured": true,
00:18:01.857 "data_offset": 256,
00:18:01.857 "data_size": 7936
00:18:01.857 }
00:18:01.857 ]
00:18:01.857 }
00:18:01.857 }
00:18:01.857 }'
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:18:01.857 pt2'
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:18:01.857 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:01.858 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:01.858 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.118 [2024-11-08 16:59:31.400670] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=54c26d28-f646-4927-b12d-02d1f440f8ef
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 54c26d28-f646-4927-b12d-02d1f440f8ef ']'
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.118 [2024-11-08 16:59:31.444296] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:02.118 [2024-11-08 16:59:31.444409] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:02.118 [2024-11-08 16:59:31.444562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:02.118 [2024-11-08 16:59:31.444687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:02.118 [2024-11-08 16:59:31.444713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.118 [2024-11-08 16:59:31.584074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:18:02.118 [2024-11-08 16:59:31.586411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:18:02.118 [2024-11-08 16:59:31.586504] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:18:02.118 [2024-11-08 16:59:31.586586] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:18:02.118 [2024-11-08 16:59:31.586617] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:02.118 [2024-11-08 16:59:31.586649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:18:02.118 request:
00:18:02.118 {
00:18:02.118 "name": "raid_bdev1",
00:18:02.118 "raid_level": "raid1",
00:18:02.118 "base_bdevs": [
00:18:02.118 "malloc1",
00:18:02.118 "malloc2"
00:18:02.118 ],
00:18:02.118 "superblock": false,
00:18:02.118 "method": "bdev_raid_create",
00:18:02.118 "req_id": 1
00:18:02.118 }
00:18:02.118 Got JSON-RPC error response
00:18:02.118 response:
00:18:02.118 {
00:18:02.118 "code": -17,
00:18:02.118 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:18:02.118 }
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.118 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.378 [2024-11-08 16:59:31.647887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:02.378 [2024-11-08 16:59:31.648034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:02.378 [2024-11-08 16:59:31.648105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:18:02.378 [2024-11-08 16:59:31.648160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:02.378 [2024-11-08 16:59:31.650675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:02.378 [2024-11-08 16:59:31.650777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:02.378 [2024-11-08 16:59:31.650900] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:18:02.378 [2024-11-08 16:59:31.651021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:02.378 pt1
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.378 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:02.378 "name": "raid_bdev1",
00:18:02.378 "uuid": "54c26d28-f646-4927-b12d-02d1f440f8ef",
00:18:02.378 "strip_size_kb": 0,
00:18:02.378 "state": "configuring",
00:18:02.378 "raid_level": "raid1",
00:18:02.378 "superblock": true,
00:18:02.378 "num_base_bdevs": 2,
00:18:02.378 "num_base_bdevs_discovered": 1,
00:18:02.378 "num_base_bdevs_operational": 2,
00:18:02.378 "base_bdevs_list": [
00:18:02.378 {
00:18:02.378 "name": "pt1",
00:18:02.378 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:02.378 "is_configured": true,
00:18:02.378 "data_offset": 256,
00:18:02.378 "data_size": 7936
00:18:02.378 },
00:18:02.378 {
00:18:02.378 "name": null,
00:18:02.378 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:02.379 "is_configured": false,
00:18:02.379 "data_offset": 256,
00:18:02.379 "data_size": 7936
00:18:02.379 }
00:18:02.379 ]
00:18:02.379 }'
00:18:02.379 16:59:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:02.379 16:59:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.638 [2024-11-08 16:59:32.131843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:02.638 [2024-11-08 16:59:32.131940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:02.638 [2024-11-08 16:59:32.131976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:18:02.638 [2024-11-08 16:59:32.131992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:02.638 [2024-11-08 16:59:32.132265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:02.638 [2024-11-08 16:59:32.132289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:02.638 [2024-11-08 16:59:32.132373] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:18:02.638 [2024-11-08 16:59:32.132403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:02.638 [2024-11-08 16:59:32.132527] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:18:02.638 [2024-11-08 16:59:32.132539] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:02.638 [2024-11-08 16:59:32.132652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:18:02.638 [2024-11-08 16:59:32.132767] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:18:02.638 [2024-11-08 16:59:32.132785] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:18:02.638 [2024-11-08 16:59:32.132872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:02.638 pt2
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:02.638 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:02.897 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.897 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:02.897 "name": "raid_bdev1",
00:18:02.897 "uuid": "54c26d28-f646-4927-b12d-02d1f440f8ef",
00:18:02.897 "strip_size_kb": 0,
00:18:02.897 "state": "online",
00:18:02.897 "raid_level": "raid1",
00:18:02.897 "superblock": true,
00:18:02.897 "num_base_bdevs": 2,
00:18:02.897 "num_base_bdevs_discovered": 2,
00:18:02.897 "num_base_bdevs_operational": 2,
00:18:02.897 "base_bdevs_list": [
00:18:02.897 {
00:18:02.897 "name": "pt1",
00:18:02.897 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:02.897 "is_configured": true,
00:18:02.897 "data_offset": 256,
00:18:02.897 "data_size": 7936
00:18:02.897 },
00:18:02.897 {
00:18:02.897 "name": "pt2",
00:18:02.897 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:02.897 "is_configured": true,
00:18:02.897 "data_offset": 256,
00:18:02.897 "data_size": 7936
00:18:02.897 }
00:18:02.897 ]
00:18:02.897 }'
00:18:02.897 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:02.897 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:03.156 [2024-11-08 16:59:32.636149] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:03.156 "name": "raid_bdev1",
00:18:03.156 "aliases": [
00:18:03.156 "54c26d28-f646-4927-b12d-02d1f440f8ef"
00:18:03.156 ],
00:18:03.156 "product_name": "Raid Volume",
00:18:03.156 "block_size": 4096,
00:18:03.156 "num_blocks": 7936,
00:18:03.156 "uuid": "54c26d28-f646-4927-b12d-02d1f440f8ef",
00:18:03.156 "md_size": 32,
00:18:03.156 "md_interleave": false,
00:18:03.156 "dif_type": 0,
00:18:03.156 "assigned_rate_limits": {
00:18:03.156 "rw_ios_per_sec": 0,
00:18:03.156 "rw_mbytes_per_sec": 0,
00:18:03.156 "r_mbytes_per_sec": 0,
00:18:03.156 "w_mbytes_per_sec": 0
00:18:03.156 },
00:18:03.156 "claimed": false,
00:18:03.156 "zoned": false,
00:18:03.156 "supported_io_types": {
00:18:03.156 "read": true,
00:18:03.156 "write": true,
00:18:03.156 "unmap": false,
00:18:03.156 "flush": false,
00:18:03.156 "reset": true,
00:18:03.156 "nvme_admin": false,
00:18:03.156 "nvme_io": false,
00:18:03.156 "nvme_io_md": false,
00:18:03.156 "write_zeroes": true,
00:18:03.156 "zcopy": false,
00:18:03.156 "get_zone_info": false,
00:18:03.156 "zone_management": false,
00:18:03.156 "zone_append": false,
00:18:03.156 "compare": false,
00:18:03.156 "compare_and_write": false,
00:18:03.156 "abort": false,
00:18:03.156 "seek_hole": false,
00:18:03.156 "seek_data": false,
00:18:03.156 "copy": false,
00:18:03.156 "nvme_iov_md": false
00:18:03.156 },
00:18:03.156 "memory_domains": [
00:18:03.156 {
00:18:03.156 "dma_device_id": "system",
00:18:03.156 "dma_device_type": 1
00:18:03.156 },
00:18:03.156 {
00:18:03.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:03.156 "dma_device_type": 2
00:18:03.156 },
00:18:03.156 {
00:18:03.156 "dma_device_id": "system",
00:18:03.156 "dma_device_type": 1
00:18:03.156 },
00:18:03.156 {
00:18:03.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:03.156 "dma_device_type": 2
00:18:03.156 }
00:18:03.156 ],
00:18:03.156 "driver_specific": {
00:18:03.156 "raid": {
00:18:03.156 "uuid": "54c26d28-f646-4927-b12d-02d1f440f8ef",
00:18:03.156 "strip_size_kb": 0,
00:18:03.156 "state": "online",
00:18:03.156 "raid_level": "raid1",
00:18:03.156 "superblock": true,
00:18:03.156 "num_base_bdevs": 2,
00:18:03.156 "num_base_bdevs_discovered": 2,
00:18:03.156 "num_base_bdevs_operational": 2,
00:18:03.156 "base_bdevs_list": [
00:18:03.156 {
00:18:03.156 "name": "pt1",
00:18:03.156 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:03.156 "is_configured": true,
00:18:03.156 "data_offset": 256,
00:18:03.156 "data_size": 7936
00:18:03.156 },
00:18:03.156 {
00:18:03.156 "name": "pt2",
00:18:03.156 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:03.156 "is_configured": true,
00:18:03.156 "data_offset": 256,
00:18:03.156 "data_size": 7936
00:18:03.156 }
00:18:03.156 ]
00:18:03.156 }
00:18:03.156 }
00:18:03.156 }'
00:18:03.156 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:18:03.415 pt2'
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:18:03.415 [2024-11-08 16:59:32.892021] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 54c26d28-f646-4927-b12d-02d1f440f8ef '!=' 54c26d28-f646-4927-b12d-02d1f440f8ef ']'
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.415 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:03.675 [2024-11-08 16:59:32.943615] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:03.675 16:59:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.675 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:03.675 "name": "raid_bdev1",
00:18:03.675 "uuid": "54c26d28-f646-4927-b12d-02d1f440f8ef",
00:18:03.675 "strip_size_kb": 0,
00:18:03.675 "state": "online",
00:18:03.675 "raid_level": "raid1",
00:18:03.675 "superblock": true,
00:18:03.675 "num_base_bdevs": 2,
00:18:03.675 "num_base_bdevs_discovered": 1,
00:18:03.675 "num_base_bdevs_operational": 1,
00:18:03.675 "base_bdevs_list": [
00:18:03.675 {
00:18:03.675 "name": null,
00:18:03.675 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:03.675 "is_configured": false,
00:18:03.675 "data_offset": 0,
00:18:03.675 "data_size": 7936
00:18:03.675 },
00:18:03.675 {
00:18:03.675 "name": "pt2",
00:18:03.675 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:03.675 "is_configured": true,
00:18:03.675 "data_offset": 256,
00:18:03.675 "data_size": 7936
00:18:03.675 ] 00:18:03.675 }' 00:18:03.675 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.675 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.934 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.934 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.934 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.935 [2024-11-08 16:59:33.443254] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.935 [2024-11-08 16:59:33.443322] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.935 [2024-11-08 16:59:33.443421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.935 [2024-11-08 16:59:33.443486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.935 [2024-11-08 16:59:33.443499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:18:03.935 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.935 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.935 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:03.935 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.935 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.194 16:59:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.194 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.194 [2024-11-08 16:59:33.523142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:04.194 [2024-11-08 
16:59:33.523375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.194 [2024-11-08 16:59:33.523444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:04.194 [2024-11-08 16:59:33.523509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.194 [2024-11-08 16:59:33.526039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.194 [2024-11-08 16:59:33.526149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:04.194 [2024-11-08 16:59:33.526279] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:04.194 [2024-11-08 16:59:33.526376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.194 [2024-11-08 16:59:33.526529] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:18:04.194 [2024-11-08 16:59:33.526581] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:04.195 [2024-11-08 16:59:33.526741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:04.195 [2024-11-08 16:59:33.526901] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:18:04.195 [2024-11-08 16:59:33.526967] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:18:04.195 [2024-11-08 16:59:33.527201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.195 pt2 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.195 "name": "raid_bdev1", 00:18:04.195 "uuid": "54c26d28-f646-4927-b12d-02d1f440f8ef", 00:18:04.195 "strip_size_kb": 0, 00:18:04.195 "state": "online", 00:18:04.195 "raid_level": "raid1", 00:18:04.195 "superblock": true, 00:18:04.195 "num_base_bdevs": 2, 00:18:04.195 
"num_base_bdevs_discovered": 1, 00:18:04.195 "num_base_bdevs_operational": 1, 00:18:04.195 "base_bdevs_list": [ 00:18:04.195 { 00:18:04.195 "name": null, 00:18:04.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.195 "is_configured": false, 00:18:04.195 "data_offset": 256, 00:18:04.195 "data_size": 7936 00:18:04.195 }, 00:18:04.195 { 00:18:04.195 "name": "pt2", 00:18:04.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.195 "is_configured": true, 00:18:04.195 "data_offset": 256, 00:18:04.195 "data_size": 7936 00:18:04.195 } 00:18:04.195 ] 00:18:04.195 }' 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.195 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.454 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:04.454 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.454 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.454 [2024-11-08 16:59:33.974842] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.454 [2024-11-08 16:59:33.974900] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.454 [2024-11-08 16:59:33.974999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.454 [2024-11-08 16:59:33.975062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.454 [2024-11-08 16:59:33.975080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:18:04.454 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.713 16:59:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.713 16:59:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:04.713 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.713 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.713 16:59:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.713 [2024-11-08 16:59:34.038832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:04.713 [2024-11-08 16:59:34.038948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.713 [2024-11-08 16:59:34.038979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:04.713 [2024-11-08 16:59:34.039000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.713 [2024-11-08 16:59:34.041457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.713 [2024-11-08 16:59:34.041526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:18:04.713 [2024-11-08 16:59:34.041610] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:04.713 [2024-11-08 16:59:34.041689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:04.713 [2024-11-08 16:59:34.041843] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:04.713 [2024-11-08 16:59:34.041866] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.713 [2024-11-08 16:59:34.041895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:18:04.713 [2024-11-08 16:59:34.041981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.713 [2024-11-08 16:59:34.042066] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:18:04.713 [2024-11-08 16:59:34.042124] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:04.713 [2024-11-08 16:59:34.042280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:04.713 [2024-11-08 16:59:34.042401] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:18:04.713 [2024-11-08 16:59:34.042412] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:18:04.713 [2024-11-08 16:59:34.042522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.713 pt1 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.713 "name": "raid_bdev1", 00:18:04.713 "uuid": "54c26d28-f646-4927-b12d-02d1f440f8ef", 00:18:04.713 "strip_size_kb": 0, 00:18:04.713 "state": "online", 00:18:04.713 "raid_level": "raid1", 
00:18:04.713 "superblock": true, 00:18:04.713 "num_base_bdevs": 2, 00:18:04.713 "num_base_bdevs_discovered": 1, 00:18:04.713 "num_base_bdevs_operational": 1, 00:18:04.713 "base_bdevs_list": [ 00:18:04.713 { 00:18:04.713 "name": null, 00:18:04.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.713 "is_configured": false, 00:18:04.713 "data_offset": 256, 00:18:04.713 "data_size": 7936 00:18:04.713 }, 00:18:04.713 { 00:18:04.713 "name": "pt2", 00:18:04.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.713 "is_configured": true, 00:18:04.713 "data_offset": 256, 00:18:04.713 "data_size": 7936 00:18:04.713 } 00:18:04.713 ] 00:18:04.713 }' 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.713 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.281 
16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.281 [2024-11-08 16:59:34.586146] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 54c26d28-f646-4927-b12d-02d1f440f8ef '!=' 54c26d28-f646-4927-b12d-02d1f440f8ef ']' 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97903 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97903 ']' 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97903 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97903 00:18:05.281 killing process with pid 97903 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97903' 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97903 00:18:05.281 [2024-11-08 16:59:34.673276] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:05.281 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # 
wait 97903 00:18:05.281 [2024-11-08 16:59:34.673474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.281 [2024-11-08 16:59:34.673539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.281 [2024-11-08 16:59:34.673680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:18:05.281 [2024-11-08 16:59:34.698972] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.539 16:59:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:05.539 00:18:05.539 real 0m5.382s 00:18:05.539 user 0m8.732s 00:18:05.539 sys 0m1.208s 00:18:05.539 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:05.539 ************************************ 00:18:05.539 END TEST raid_superblock_test_md_separate 00:18:05.539 ************************************ 00:18:05.539 16:59:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.539 16:59:35 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:05.539 16:59:35 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:05.539 16:59:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:05.539 16:59:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.539 16:59:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.539 ************************************ 00:18:05.539 START TEST raid_rebuild_test_sb_md_separate 00:18:05.539 ************************************ 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local 
raid_level=raid1 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:05.539 
16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98221 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:05.539 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98221 00:18:05.540 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98221 ']' 00:18:05.540 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.540 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.540 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:05.540 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.540 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.798 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:05.798 Zero copy mechanism will not be used. 00:18:05.798 [2024-11-08 16:59:35.116028] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:05.798 [2024-11-08 16:59:35.116187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98221 ] 00:18:05.798 [2024-11-08 16:59:35.277503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.055 [2024-11-08 16:59:35.326206] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.055 [2024-11-08 16:59:35.370167] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.055 [2024-11-08 16:59:35.370207] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.623 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.623 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:06.623 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.623 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:06.623 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.623 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.623 BaseBdev1_malloc 
00:18:06.623 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.623 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:06.623 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.623 16:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.623 [2024-11-08 16:59:36.003077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:06.623 [2024-11-08 16:59:36.003166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.623 [2024-11-08 16:59:36.003195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:06.623 [2024-11-08 16:59:36.003207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.624 [2024-11-08 16:59:36.005499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.624 [2024-11-08 16:59:36.005691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.624 BaseBdev1 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.624 BaseBdev2_malloc 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.624 [2024-11-08 16:59:36.043120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:06.624 [2024-11-08 16:59:36.043194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.624 [2024-11-08 16:59:36.043219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:06.624 [2024-11-08 16:59:36.043230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.624 [2024-11-08 16:59:36.045689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.624 [2024-11-08 16:59:36.045738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:06.624 BaseBdev2 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.624 spare_malloc 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.624 spare_delay 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.624 [2024-11-08 16:59:36.084707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.624 [2024-11-08 16:59:36.084867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.624 [2024-11-08 16:59:36.084897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:06.624 [2024-11-08 16:59:36.084909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.624 [2024-11-08 16:59:36.086910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.624 [2024-11-08 16:59:36.086950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.624 spare 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.624 [2024-11-08 16:59:36.096710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.624 [2024-11-08 16:59:36.098543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.624 [2024-11-08 16:59:36.098714] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:18:06.624 [2024-11-08 16:59:36.098728] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.624 [2024-11-08 16:59:36.098812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:06.624 [2024-11-08 16:59:36.098903] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:18:06.624 [2024-11-08 16:59:36.098912] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:18:06.624 [2024-11-08 16:59:36.098993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.624 16:59:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.624 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.884 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.884 "name": "raid_bdev1", 00:18:06.884 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:06.884 "strip_size_kb": 0, 00:18:06.884 "state": "online", 00:18:06.884 "raid_level": "raid1", 00:18:06.884 "superblock": true, 00:18:06.884 "num_base_bdevs": 2, 00:18:06.884 "num_base_bdevs_discovered": 2, 00:18:06.884 "num_base_bdevs_operational": 2, 00:18:06.884 "base_bdevs_list": [ 00:18:06.884 { 00:18:06.884 "name": "BaseBdev1", 00:18:06.884 "uuid": "55ae0d23-0fd9-5641-8ba7-165669ce9ddd", 00:18:06.884 "is_configured": true, 00:18:06.884 "data_offset": 256, 00:18:06.884 "data_size": 7936 00:18:06.884 }, 00:18:06.884 { 00:18:06.884 "name": "BaseBdev2", 00:18:06.884 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:06.884 "is_configured": true, 00:18:06.884 "data_offset": 256, 00:18:06.884 "data_size": 7936 
00:18:06.884 } 00:18:06.884 ] 00:18:06.884 }' 00:18:06.884 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.884 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.143 [2024-11-08 16:59:36.576276] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
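The `verify_raid_bdev_state` helper whose xtrace appears above pulls the `raid_bdev1` entry out of `bdev_raid_get_bdevs` output with `jq` and compares individual fields against the expected values. A minimal standalone sketch of that extraction, assuming `jq` is installed and using a trimmed copy of the JSON the log shows:

```shell
# Trimmed stand-in for the bdev_raid_get_bdevs JSON dumped in the log above.
raid_bdev_info='{"name":"raid_bdev1","state":"online","raid_level":"raid1","num_base_bdevs_discovered":2,"num_base_bdevs_operational":2}'

# Extract the fields verify_raid_bdev_state checks, raw (-r) so strings are unquoted.
state=$(jq -r '.state' <<< "$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")

# Compare against the expected state, as the helper does.
[ "$state" = online ] && [ "$discovered" -eq 2 ] && echo "raid_bdev1 verified"
```

The real helper runs `jq -r '.[] | select(.name == "raid_bdev1")'` over the full array first; the sketch starts from the already-selected object.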
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:07.143 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:07.403 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.403 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:07.403 [2024-11-08 16:59:36.907591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:07.662 /dev/nbd0 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.662 1+0 records in 00:18:07.662 1+0 records out 00:18:07.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493584 s, 8.3 MB/s 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.662 16:59:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:07.662 16:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:08.296 7936+0 records in 00:18:08.296 7936+0 records out 00:18:08.296 32505856 bytes (33 MB, 31 MiB) copied, 0.794503 s, 40.9 MB/s 00:18:08.296 16:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:08.296 16:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.296 16:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:08.296 16:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:08.296 16:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:08.296 16:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.296 16:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:08.554 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:08.554 [2024-11-08 16:59:38.063970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.555 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:08.555 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:08.555 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.555 16:59:38 
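The `dd` step above fills the exported `/dev/nbd0` with random data before the rebuild checks: 7936 blocks of 4096 bytes, i.e. 32505856 bytes, matching the raid bdev's reported blockcnt. A hedged re-creation of that size arithmetic against a temporary file (a plain file stands in for `/dev/nbd0`, and `oflag=direct` is dropped because regular files on some filesystems do not support O_DIRECT):

```shell
# Fill a scratch file the same way the test fills the nbd device:
# 7936 blocks * 4096 bytes/block = 32505856 bytes (the "33 MB" in the log).
img=$(mktemp)
dd if=/dev/urandom of="$img" bs=4096 count=7936 2>/dev/null

# Confirm the written size, mirroring the stat -c %s check used elsewhere in the log.
size=$(stat -c %s "$img")
echo "$size"
rm -f "$img"
```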
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.555 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:08.555 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:08.555 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.555 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:08.555 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.555 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.555 [2024-11-08 16:59:38.080071] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.814 "name": "raid_bdev1", 00:18:08.814 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:08.814 "strip_size_kb": 0, 00:18:08.814 "state": "online", 00:18:08.814 "raid_level": "raid1", 00:18:08.814 "superblock": true, 00:18:08.814 "num_base_bdevs": 2, 00:18:08.814 "num_base_bdevs_discovered": 1, 00:18:08.814 "num_base_bdevs_operational": 1, 00:18:08.814 "base_bdevs_list": [ 00:18:08.814 { 00:18:08.814 "name": null, 00:18:08.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.814 "is_configured": false, 00:18:08.814 "data_offset": 0, 00:18:08.814 "data_size": 7936 00:18:08.814 }, 00:18:08.814 { 00:18:08.814 "name": "BaseBdev2", 00:18:08.814 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:08.814 "is_configured": true, 00:18:08.814 "data_offset": 256, 00:18:08.814 "data_size": 7936 00:18:08.814 } 00:18:08.814 ] 00:18:08.814 }' 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.814 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.074 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:09.074 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.074 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.074 [2024-11-08 16:59:38.535428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.074 [2024-11-08 16:59:38.537615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:18:09.074 [2024-11-08 16:59:38.539819] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.074 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.074 16:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.457 "name": "raid_bdev1", 00:18:10.457 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:10.457 "strip_size_kb": 0, 00:18:10.457 "state": "online", 00:18:10.457 "raid_level": "raid1", 00:18:10.457 "superblock": true, 00:18:10.457 "num_base_bdevs": 2, 00:18:10.457 "num_base_bdevs_discovered": 2, 00:18:10.457 "num_base_bdevs_operational": 2, 00:18:10.457 "process": { 00:18:10.457 "type": "rebuild", 00:18:10.457 "target": "spare", 00:18:10.457 "progress": { 00:18:10.457 "blocks": 2560, 00:18:10.457 "percent": 32 00:18:10.457 } 00:18:10.457 }, 00:18:10.457 "base_bdevs_list": [ 00:18:10.457 { 00:18:10.457 "name": "spare", 00:18:10.457 "uuid": "9585b0f5-4dd4-5166-aa69-3f21afabdc67", 00:18:10.457 "is_configured": true, 00:18:10.457 "data_offset": 256, 00:18:10.457 "data_size": 7936 00:18:10.457 }, 00:18:10.457 { 00:18:10.457 "name": "BaseBdev2", 00:18:10.457 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:10.457 "is_configured": true, 00:18:10.457 "data_offset": 256, 00:18:10.457 "data_size": 7936 00:18:10.457 } 00:18:10.457 ] 00:18:10.457 }' 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.457 16:59:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.457 [2024-11-08 16:59:39.711471] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.457 [2024-11-08 16:59:39.747178] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:10.457 [2024-11-08 16:59:39.747374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.457 [2024-11-08 16:59:39.747398] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.457 [2024-11-08 16:59:39.747407] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.457 16:59:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.457 "name": "raid_bdev1", 00:18:10.457 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:10.457 "strip_size_kb": 0, 00:18:10.457 "state": "online", 00:18:10.457 "raid_level": "raid1", 00:18:10.457 "superblock": true, 00:18:10.457 "num_base_bdevs": 2, 00:18:10.457 "num_base_bdevs_discovered": 1, 00:18:10.457 "num_base_bdevs_operational": 1, 00:18:10.457 "base_bdevs_list": [ 00:18:10.457 { 00:18:10.457 "name": null, 00:18:10.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.457 "is_configured": false, 00:18:10.457 "data_offset": 0, 00:18:10.457 "data_size": 7936 00:18:10.457 }, 00:18:10.457 { 00:18:10.457 "name": "BaseBdev2", 00:18:10.457 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:10.457 "is_configured": true, 00:18:10.457 "data_offset": 256, 00:18:10.457 "data_size": 7936 00:18:10.457 } 00:18:10.457 ] 00:18:10.457 }' 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.457 16:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.025 "name": "raid_bdev1", 00:18:11.025 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:11.025 "strip_size_kb": 0, 00:18:11.025 "state": "online", 00:18:11.025 "raid_level": "raid1", 00:18:11.025 "superblock": true, 00:18:11.025 "num_base_bdevs": 2, 00:18:11.025 "num_base_bdevs_discovered": 1, 00:18:11.025 "num_base_bdevs_operational": 1, 00:18:11.025 "base_bdevs_list": [ 00:18:11.025 { 00:18:11.025 "name": null, 00:18:11.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.025 
"is_configured": false, 00:18:11.025 "data_offset": 0, 00:18:11.025 "data_size": 7936 00:18:11.025 }, 00:18:11.025 { 00:18:11.025 "name": "BaseBdev2", 00:18:11.025 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:11.025 "is_configured": true, 00:18:11.025 "data_offset": 256, 00:18:11.025 "data_size": 7936 00:18:11.025 } 00:18:11.025 ] 00:18:11.025 }' 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.025 [2024-11-08 16:59:40.367369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.025 [2024-11-08 16:59:40.369204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:18:11.025 [2024-11-08 16:59:40.371161] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.025 16:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.967 16:59:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.967 "name": "raid_bdev1", 00:18:11.967 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:11.967 "strip_size_kb": 0, 00:18:11.967 "state": "online", 00:18:11.967 "raid_level": "raid1", 00:18:11.967 "superblock": true, 00:18:11.967 "num_base_bdevs": 2, 00:18:11.967 "num_base_bdevs_discovered": 2, 00:18:11.967 "num_base_bdevs_operational": 2, 00:18:11.967 "process": { 00:18:11.967 "type": "rebuild", 00:18:11.967 "target": "spare", 00:18:11.967 "progress": { 00:18:11.967 "blocks": 2560, 00:18:11.967 "percent": 32 00:18:11.967 } 00:18:11.967 }, 00:18:11.967 "base_bdevs_list": [ 00:18:11.967 { 00:18:11.967 "name": "spare", 00:18:11.967 "uuid": "9585b0f5-4dd4-5166-aa69-3f21afabdc67", 00:18:11.967 "is_configured": true, 00:18:11.967 "data_offset": 256, 00:18:11.967 "data_size": 7936 00:18:11.967 }, 
00:18:11.967 { 00:18:11.967 "name": "BaseBdev2", 00:18:11.967 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:11.967 "is_configured": true, 00:18:11.967 "data_offset": 256, 00:18:11.967 "data_size": 7936 00:18:11.967 } 00:18:11.967 ] 00:18:11.967 }' 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.967 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:12.227 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=606 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.227 16:59:41 
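The `[: =: unary operator expected` message from bdev_raid.sh line 666 above is the classic single-bracket failure: the xtrace shows `'[' = false ']'`, meaning the left-hand variable expanded to the empty string, so `[` saw only `= false` and treated `=` as a missing-operand unary test. A minimal reproduction-and-fix sketch (the variable name here is hypothetical, not the one bdev_raid.sh actually uses); quoting the expansion, or supplying a default with `${var:-...}`, keeps the test well-formed even when the variable is unset or empty:

```shell
# Hypothetical empty variable, as at bdev_raid.sh line 666.
# Unquoted, [ $fast_copy = true ] would expand to [ = true ] and fail with
# "[: =: unary operator expected"; the quoted form with a default does not.
fast_copy=""
if [ "${fast_copy:-false}" = true ]; then
    msg="fast copy enabled"
else
    msg="fast copy disabled"
fi
echo "$msg"
```

Using bash's `[[ ... ]]` instead of `[ ... ]` also avoids the error, since `[[` does not word-split unquoted expansions.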
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.227 "name": "raid_bdev1", 00:18:12.227 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:12.227 "strip_size_kb": 0, 00:18:12.227 "state": "online", 00:18:12.227 "raid_level": "raid1", 00:18:12.227 "superblock": true, 00:18:12.227 "num_base_bdevs": 2, 00:18:12.227 "num_base_bdevs_discovered": 2, 00:18:12.227 "num_base_bdevs_operational": 2, 00:18:12.227 "process": { 00:18:12.227 "type": "rebuild", 00:18:12.227 "target": "spare", 00:18:12.227 "progress": { 00:18:12.227 "blocks": 2816, 00:18:12.227 "percent": 35 00:18:12.227 } 00:18:12.227 }, 00:18:12.227 "base_bdevs_list": [ 00:18:12.227 { 00:18:12.227 "name": "spare", 00:18:12.227 "uuid": "9585b0f5-4dd4-5166-aa69-3f21afabdc67", 00:18:12.227 "is_configured": true, 00:18:12.227 "data_offset": 256, 00:18:12.227 "data_size": 7936 00:18:12.227 }, 00:18:12.227 { 00:18:12.227 "name": "BaseBdev2", 00:18:12.227 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:12.227 
"is_configured": true, 00:18:12.227 "data_offset": 256, 00:18:12.227 "data_size": 7936 00:18:12.227 } 00:18:12.227 ] 00:18:12.227 }' 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.227 16:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.165 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.165 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.165 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.165 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.165 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.165 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.165 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.165 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.424 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.424 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.424 16:59:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.424 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.424 "name": "raid_bdev1", 00:18:13.424 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:13.424 "strip_size_kb": 0, 00:18:13.424 "state": "online", 00:18:13.424 "raid_level": "raid1", 00:18:13.424 "superblock": true, 00:18:13.424 "num_base_bdevs": 2, 00:18:13.424 "num_base_bdevs_discovered": 2, 00:18:13.424 "num_base_bdevs_operational": 2, 00:18:13.424 "process": { 00:18:13.424 "type": "rebuild", 00:18:13.424 "target": "spare", 00:18:13.424 "progress": { 00:18:13.424 "blocks": 5888, 00:18:13.424 "percent": 74 00:18:13.424 } 00:18:13.424 }, 00:18:13.424 "base_bdevs_list": [ 00:18:13.424 { 00:18:13.424 "name": "spare", 00:18:13.424 "uuid": "9585b0f5-4dd4-5166-aa69-3f21afabdc67", 00:18:13.424 "is_configured": true, 00:18:13.424 "data_offset": 256, 00:18:13.424 "data_size": 7936 00:18:13.424 }, 00:18:13.424 { 00:18:13.424 "name": "BaseBdev2", 00:18:13.424 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:13.424 "is_configured": true, 00:18:13.424 "data_offset": 256, 00:18:13.424 "data_size": 7936 00:18:13.424 } 00:18:13.424 ] 00:18:13.424 }' 00:18:13.424 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.424 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.424 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.424 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.424 16:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.992 [2024-11-08 16:59:43.484752] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:13.992 [2024-11-08 16:59:43.484846] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:13.992 [2024-11-08 16:59:43.484966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.558 "name": "raid_bdev1", 00:18:14.558 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:14.558 "strip_size_kb": 0, 00:18:14.558 "state": "online", 00:18:14.558 "raid_level": "raid1", 00:18:14.558 "superblock": true, 00:18:14.558 
"num_base_bdevs": 2, 00:18:14.558 "num_base_bdevs_discovered": 2, 00:18:14.558 "num_base_bdevs_operational": 2, 00:18:14.558 "base_bdevs_list": [ 00:18:14.558 { 00:18:14.558 "name": "spare", 00:18:14.558 "uuid": "9585b0f5-4dd4-5166-aa69-3f21afabdc67", 00:18:14.558 "is_configured": true, 00:18:14.558 "data_offset": 256, 00:18:14.558 "data_size": 7936 00:18:14.558 }, 00:18:14.558 { 00:18:14.558 "name": "BaseBdev2", 00:18:14.558 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:14.558 "is_configured": true, 00:18:14.558 "data_offset": 256, 00:18:14.558 "data_size": 7936 00:18:14.558 } 00:18:14.558 ] 00:18:14.558 }' 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:14.558 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.559 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.559 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.559 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.559 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.559 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.559 16:59:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.559 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.559 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.559 16:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.559 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.559 "name": "raid_bdev1", 00:18:14.559 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:14.559 "strip_size_kb": 0, 00:18:14.559 "state": "online", 00:18:14.559 "raid_level": "raid1", 00:18:14.559 "superblock": true, 00:18:14.559 "num_base_bdevs": 2, 00:18:14.559 "num_base_bdevs_discovered": 2, 00:18:14.559 "num_base_bdevs_operational": 2, 00:18:14.559 "base_bdevs_list": [ 00:18:14.559 { 00:18:14.559 "name": "spare", 00:18:14.559 "uuid": "9585b0f5-4dd4-5166-aa69-3f21afabdc67", 00:18:14.559 "is_configured": true, 00:18:14.559 "data_offset": 256, 00:18:14.559 "data_size": 7936 00:18:14.559 }, 00:18:14.559 { 00:18:14.559 "name": "BaseBdev2", 00:18:14.559 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:14.559 "is_configured": true, 00:18:14.559 "data_offset": 256, 00:18:14.559 "data_size": 7936 00:18:14.559 } 00:18:14.559 ] 00:18:14.559 }' 00:18:14.559 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.559 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.559 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.818 "name": "raid_bdev1", 00:18:14.818 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:14.818 
"strip_size_kb": 0, 00:18:14.818 "state": "online", 00:18:14.818 "raid_level": "raid1", 00:18:14.818 "superblock": true, 00:18:14.818 "num_base_bdevs": 2, 00:18:14.818 "num_base_bdevs_discovered": 2, 00:18:14.818 "num_base_bdevs_operational": 2, 00:18:14.818 "base_bdevs_list": [ 00:18:14.818 { 00:18:14.818 "name": "spare", 00:18:14.818 "uuid": "9585b0f5-4dd4-5166-aa69-3f21afabdc67", 00:18:14.818 "is_configured": true, 00:18:14.818 "data_offset": 256, 00:18:14.818 "data_size": 7936 00:18:14.818 }, 00:18:14.818 { 00:18:14.818 "name": "BaseBdev2", 00:18:14.818 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:14.818 "is_configured": true, 00:18:14.818 "data_offset": 256, 00:18:14.818 "data_size": 7936 00:18:14.818 } 00:18:14.818 ] 00:18:14.818 }' 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.818 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.078 [2024-11-08 16:59:44.491826] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:15.078 [2024-11-08 16:59:44.491984] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.078 [2024-11-08 16:59:44.492099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.078 [2024-11-08 16:59:44.492206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.078 [2024-11-08 16:59:44.492227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, 
state offline 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:15.078 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:15.338 /dev/nbd0 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:15.338 1+0 records in 00:18:15.338 1+0 records out 00:18:15.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426831 s, 9.6 MB/s 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:15.338 16:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:15.624 /dev/nbd1 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:15.624 1+0 records in 00:18:15.624 1+0 records out 00:18:15.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406636 s, 10.1 MB/s 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:15.624 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:15.908 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:15.908 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:15.908 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:15.908 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:15.908 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:15.908 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.908 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.167 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:16.168 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:16.168 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.168 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:16.168 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:16.168 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.168 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.168 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.168 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.168 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.168 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.427 [2024-11-08 16:59:45.698073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.427 [2024-11-08 16:59:45.698154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.427 [2024-11-08 16:59:45.698175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:16.427 [2024-11-08 16:59:45.698189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:16.427 [2024-11-08 16:59:45.700257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.427 [2024-11-08 16:59:45.700300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.427 [2024-11-08 16:59:45.700361] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:16.427 [2024-11-08 16:59:45.700413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.427 [2024-11-08 16:59:45.700559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.427 spare 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.427 [2024-11-08 16:59:45.800453] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:18:16.427 [2024-11-08 16:59:45.800485] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:16.427 [2024-11-08 16:59:45.800599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:18:16.427 [2024-11-08 16:59:45.800733] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:18:16.427 [2024-11-08 16:59:45.800763] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:18:16.427 [2024-11-08 16:59:45.800863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.427 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.428 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.428 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.428 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.428 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.428 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.428 "name": "raid_bdev1", 00:18:16.428 "uuid": 
"f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:16.428 "strip_size_kb": 0, 00:18:16.428 "state": "online", 00:18:16.428 "raid_level": "raid1", 00:18:16.428 "superblock": true, 00:18:16.428 "num_base_bdevs": 2, 00:18:16.428 "num_base_bdevs_discovered": 2, 00:18:16.428 "num_base_bdevs_operational": 2, 00:18:16.428 "base_bdevs_list": [ 00:18:16.428 { 00:18:16.428 "name": "spare", 00:18:16.428 "uuid": "9585b0f5-4dd4-5166-aa69-3f21afabdc67", 00:18:16.428 "is_configured": true, 00:18:16.428 "data_offset": 256, 00:18:16.428 "data_size": 7936 00:18:16.428 }, 00:18:16.428 { 00:18:16.428 "name": "BaseBdev2", 00:18:16.428 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:16.428 "is_configured": true, 00:18:16.428 "data_offset": 256, 00:18:16.428 "data_size": 7936 00:18:16.428 } 00:18:16.428 ] 00:18:16.428 }' 00:18:16.428 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.428 16:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.997 "name": "raid_bdev1", 00:18:16.997 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:16.997 "strip_size_kb": 0, 00:18:16.997 "state": "online", 00:18:16.997 "raid_level": "raid1", 00:18:16.997 "superblock": true, 00:18:16.997 "num_base_bdevs": 2, 00:18:16.997 "num_base_bdevs_discovered": 2, 00:18:16.997 "num_base_bdevs_operational": 2, 00:18:16.997 "base_bdevs_list": [ 00:18:16.997 { 00:18:16.997 "name": "spare", 00:18:16.997 "uuid": "9585b0f5-4dd4-5166-aa69-3f21afabdc67", 00:18:16.997 "is_configured": true, 00:18:16.997 "data_offset": 256, 00:18:16.997 "data_size": 7936 00:18:16.997 }, 00:18:16.997 { 00:18:16.997 "name": "BaseBdev2", 00:18:16.997 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:16.997 "is_configured": true, 00:18:16.997 "data_offset": 256, 00:18:16.997 "data_size": 7936 00:18:16.997 } 00:18:16.997 ] 00:18:16.997 }' 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.997 [2024-11-08 16:59:46.500741] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.997 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.998 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.998 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.998 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.998 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.998 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.998 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.998 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.998 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.998 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.257 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.257 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.257 "name": "raid_bdev1", 00:18:17.257 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:17.257 "strip_size_kb": 0, 00:18:17.257 "state": "online", 00:18:17.257 "raid_level": "raid1", 00:18:17.257 "superblock": true, 00:18:17.257 "num_base_bdevs": 2, 00:18:17.257 "num_base_bdevs_discovered": 1, 00:18:17.257 "num_base_bdevs_operational": 1, 00:18:17.257 "base_bdevs_list": [ 00:18:17.257 { 00:18:17.257 "name": null, 00:18:17.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.257 "is_configured": false, 00:18:17.257 "data_offset": 0, 00:18:17.257 "data_size": 7936 00:18:17.257 }, 00:18:17.257 { 00:18:17.257 "name": "BaseBdev2", 00:18:17.257 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:17.257 "is_configured": true, 00:18:17.257 "data_offset": 256, 00:18:17.257 "data_size": 7936 00:18:17.257 } 00:18:17.257 ] 00:18:17.257 }' 00:18:17.257 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.257 16:59:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.517 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:17.517 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.517 16:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.517 [2024-11-08 16:59:46.999915] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.517 [2024-11-08 16:59:47.000141] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:17.517 [2024-11-08 16:59:47.000180] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:17.517 [2024-11-08 16:59:47.000230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.517 [2024-11-08 16:59:47.001855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:18:17.517 [2024-11-08 16:59:47.003760] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:17.517 16:59:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.517 16:59:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.898 "name": "raid_bdev1", 00:18:18.898 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:18.898 "strip_size_kb": 0, 00:18:18.898 "state": "online", 00:18:18.898 "raid_level": "raid1", 00:18:18.898 "superblock": true, 00:18:18.898 "num_base_bdevs": 2, 00:18:18.898 "num_base_bdevs_discovered": 2, 00:18:18.898 "num_base_bdevs_operational": 2, 00:18:18.898 "process": { 00:18:18.898 "type": "rebuild", 00:18:18.898 "target": "spare", 00:18:18.898 "progress": { 00:18:18.898 "blocks": 2560, 00:18:18.898 "percent": 32 00:18:18.898 } 00:18:18.898 }, 00:18:18.898 "base_bdevs_list": [ 00:18:18.898 { 00:18:18.898 "name": "spare", 00:18:18.898 "uuid": "9585b0f5-4dd4-5166-aa69-3f21afabdc67", 00:18:18.898 "is_configured": true, 00:18:18.898 "data_offset": 256, 00:18:18.898 "data_size": 7936 00:18:18.898 }, 00:18:18.898 { 00:18:18.898 "name": "BaseBdev2", 00:18:18.898 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:18.898 "is_configured": true, 00:18:18.898 "data_offset": 256, 00:18:18.898 "data_size": 7936 00:18:18.898 } 00:18:18.898 ] 00:18:18.898 }' 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.898 [2024-11-08 16:59:48.158795] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.898 [2024-11-08 16:59:48.208760] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:18.898 [2024-11-08 16:59:48.208872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.898 [2024-11-08 16:59:48.208889] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.898 [2024-11-08 16:59:48.208896] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.898 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.898 "name": "raid_bdev1", 00:18:18.898 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:18.898 "strip_size_kb": 0, 00:18:18.898 "state": "online", 00:18:18.898 "raid_level": "raid1", 00:18:18.898 "superblock": true, 00:18:18.898 "num_base_bdevs": 2, 00:18:18.898 "num_base_bdevs_discovered": 1, 00:18:18.898 "num_base_bdevs_operational": 1, 00:18:18.898 "base_bdevs_list": [ 00:18:18.898 { 00:18:18.898 "name": null, 00:18:18.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.898 
"is_configured": false, 00:18:18.898 "data_offset": 0, 00:18:18.898 "data_size": 7936 00:18:18.899 }, 00:18:18.899 { 00:18:18.899 "name": "BaseBdev2", 00:18:18.899 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:18.899 "is_configured": true, 00:18:18.899 "data_offset": 256, 00:18:18.899 "data_size": 7936 00:18:18.899 } 00:18:18.899 ] 00:18:18.899 }' 00:18:18.899 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.899 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.468 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:19.468 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.468 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.468 [2024-11-08 16:59:48.719257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:19.468 [2024-11-08 16:59:48.719384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.468 [2024-11-08 16:59:48.719416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:19.468 [2024-11-08 16:59:48.719428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.468 [2024-11-08 16:59:48.719696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.468 [2024-11-08 16:59:48.719721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:19.468 [2024-11-08 16:59:48.719799] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:19.468 [2024-11-08 16:59:48.719813] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:18:19.468 [2024-11-08 16:59:48.719830] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:19.468 [2024-11-08 16:59:48.719859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.468 [2024-11-08 16:59:48.721581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:19.468 [2024-11-08 16:59:48.723675] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:19.468 spare 00:18:19.468 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.468 16:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:20.411 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.411 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.411 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.411 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.411 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.411 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.411 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.411 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.411 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.411 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:20.411 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.411 "name": "raid_bdev1", 00:18:20.411 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:20.411 "strip_size_kb": 0, 00:18:20.411 "state": "online", 00:18:20.411 "raid_level": "raid1", 00:18:20.411 "superblock": true, 00:18:20.411 "num_base_bdevs": 2, 00:18:20.411 "num_base_bdevs_discovered": 2, 00:18:20.411 "num_base_bdevs_operational": 2, 00:18:20.411 "process": { 00:18:20.411 "type": "rebuild", 00:18:20.411 "target": "spare", 00:18:20.411 "progress": { 00:18:20.411 "blocks": 2560, 00:18:20.411 "percent": 32 00:18:20.411 } 00:18:20.411 }, 00:18:20.411 "base_bdevs_list": [ 00:18:20.411 { 00:18:20.411 "name": "spare", 00:18:20.411 "uuid": "9585b0f5-4dd4-5166-aa69-3f21afabdc67", 00:18:20.411 "is_configured": true, 00:18:20.411 "data_offset": 256, 00:18:20.411 "data_size": 7936 00:18:20.411 }, 00:18:20.411 { 00:18:20.411 "name": "BaseBdev2", 00:18:20.411 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:20.411 "is_configured": true, 00:18:20.411 "data_offset": 256, 00:18:20.412 "data_size": 7936 00:18:20.412 } 00:18:20.412 ] 00:18:20.412 }' 00:18:20.412 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.412 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.412 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.412 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.412 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:20.412 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.412 16:59:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.412 [2024-11-08 16:59:49.882861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.412 [2024-11-08 16:59:49.928876] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:20.412 [2024-11-08 16:59:49.928973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.412 [2024-11-08 16:59:49.928989] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.412 [2024-11-08 16:59:49.928999] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:20.412 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.412 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.671 16:59:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.671 "name": "raid_bdev1", 00:18:20.671 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:20.671 "strip_size_kb": 0, 00:18:20.671 "state": "online", 00:18:20.671 "raid_level": "raid1", 00:18:20.671 "superblock": true, 00:18:20.671 "num_base_bdevs": 2, 00:18:20.671 "num_base_bdevs_discovered": 1, 00:18:20.671 "num_base_bdevs_operational": 1, 00:18:20.671 "base_bdevs_list": [ 00:18:20.671 { 00:18:20.671 "name": null, 00:18:20.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.671 "is_configured": false, 00:18:20.671 "data_offset": 0, 00:18:20.671 "data_size": 7936 00:18:20.671 }, 00:18:20.671 { 00:18:20.671 "name": "BaseBdev2", 00:18:20.671 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:20.671 "is_configured": true, 00:18:20.671 "data_offset": 256, 00:18:20.671 "data_size": 7936 00:18:20.671 } 00:18:20.671 ] 00:18:20.671 }' 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.671 16:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.929 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:18:20.929 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.929 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.929 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.929 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.929 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.929 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.929 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.929 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.929 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.192 "name": "raid_bdev1", 00:18:21.192 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:21.192 "strip_size_kb": 0, 00:18:21.192 "state": "online", 00:18:21.192 "raid_level": "raid1", 00:18:21.192 "superblock": true, 00:18:21.192 "num_base_bdevs": 2, 00:18:21.192 "num_base_bdevs_discovered": 1, 00:18:21.192 "num_base_bdevs_operational": 1, 00:18:21.192 "base_bdevs_list": [ 00:18:21.192 { 00:18:21.192 "name": null, 00:18:21.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.192 "is_configured": false, 00:18:21.192 "data_offset": 0, 00:18:21.192 "data_size": 7936 00:18:21.192 }, 00:18:21.192 { 00:18:21.192 "name": "BaseBdev2", 00:18:21.192 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:21.192 "is_configured": true, 
00:18:21.192 "data_offset": 256, 00:18:21.192 "data_size": 7936 00:18:21.192 } 00:18:21.192 ] 00:18:21.192 }' 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.192 [2024-11-08 16:59:50.575099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:21.192 [2024-11-08 16:59:50.575205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.192 [2024-11-08 16:59:50.575229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:21.192 [2024-11-08 16:59:50.575242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.192 [2024-11-08 16:59:50.575469] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.192 [2024-11-08 16:59:50.575492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:21.192 [2024-11-08 16:59:50.575550] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:21.192 [2024-11-08 16:59:50.575574] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:21.192 [2024-11-08 16:59:50.575582] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:21.192 [2024-11-08 16:59:50.575597] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:21.192 BaseBdev1 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.192 16:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.131 "name": "raid_bdev1", 00:18:22.131 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:22.131 "strip_size_kb": 0, 00:18:22.131 "state": "online", 00:18:22.131 "raid_level": "raid1", 00:18:22.131 "superblock": true, 00:18:22.131 "num_base_bdevs": 2, 00:18:22.131 "num_base_bdevs_discovered": 1, 00:18:22.131 "num_base_bdevs_operational": 1, 00:18:22.131 "base_bdevs_list": [ 00:18:22.131 { 00:18:22.131 "name": null, 00:18:22.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.131 "is_configured": false, 00:18:22.131 "data_offset": 0, 00:18:22.131 "data_size": 7936 00:18:22.131 }, 00:18:22.131 { 00:18:22.131 "name": "BaseBdev2", 00:18:22.131 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:22.131 "is_configured": true, 00:18:22.131 "data_offset": 256, 00:18:22.131 "data_size": 7936 00:18:22.131 } 00:18:22.131 ] 00:18:22.131 }' 00:18:22.131 16:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.131 16:59:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.700 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.700 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.700 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.700 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:22.700 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.700 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.701 "name": "raid_bdev1", 00:18:22.701 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:22.701 "strip_size_kb": 0, 00:18:22.701 "state": "online", 00:18:22.701 "raid_level": "raid1", 00:18:22.701 "superblock": true, 00:18:22.701 "num_base_bdevs": 2, 00:18:22.701 "num_base_bdevs_discovered": 1, 00:18:22.701 "num_base_bdevs_operational": 1, 00:18:22.701 "base_bdevs_list": [ 00:18:22.701 { 00:18:22.701 "name": null, 00:18:22.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.701 "is_configured": false, 00:18:22.701 "data_offset": 0, 00:18:22.701 
"data_size": 7936 00:18:22.701 }, 00:18:22.701 { 00:18:22.701 "name": "BaseBdev2", 00:18:22.701 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:22.701 "is_configured": true, 00:18:22.701 "data_offset": 256, 00:18:22.701 "data_size": 7936 00:18:22.701 } 00:18:22.701 ] 00:18:22.701 }' 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.701 [2024-11-08 16:59:52.192422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:22.701 [2024-11-08 16:59:52.192627] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:22.701 [2024-11-08 16:59:52.192640] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:22.701 request: 00:18:22.701 { 00:18:22.701 "base_bdev": "BaseBdev1", 00:18:22.701 "raid_bdev": "raid_bdev1", 00:18:22.701 "method": "bdev_raid_add_base_bdev", 00:18:22.701 "req_id": 1 00:18:22.701 } 00:18:22.701 Got JSON-RPC error response 00:18:22.701 response: 00:18:22.701 { 00:18:22.701 "code": -22, 00:18:22.701 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:22.701 } 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.701 16:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.082 "name": "raid_bdev1", 00:18:24.082 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:24.082 "strip_size_kb": 0, 00:18:24.082 "state": "online", 00:18:24.082 "raid_level": "raid1", 00:18:24.082 "superblock": true, 00:18:24.082 "num_base_bdevs": 2, 00:18:24.082 "num_base_bdevs_discovered": 1, 00:18:24.082 "num_base_bdevs_operational": 1, 00:18:24.082 "base_bdevs_list": [ 
00:18:24.082 { 00:18:24.082 "name": null, 00:18:24.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.082 "is_configured": false, 00:18:24.082 "data_offset": 0, 00:18:24.082 "data_size": 7936 00:18:24.082 }, 00:18:24.082 { 00:18:24.082 "name": "BaseBdev2", 00:18:24.082 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:24.082 "is_configured": true, 00:18:24.082 "data_offset": 256, 00:18:24.082 "data_size": 7936 00:18:24.082 } 00:18:24.082 ] 00:18:24.082 }' 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.082 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.342 "name": "raid_bdev1", 00:18:24.342 "uuid": "f7ac880d-45d3-4dc3-8c88-58ba5ffcd595", 00:18:24.342 "strip_size_kb": 0, 00:18:24.342 "state": "online", 00:18:24.342 "raid_level": "raid1", 00:18:24.342 "superblock": true, 00:18:24.342 "num_base_bdevs": 2, 00:18:24.342 "num_base_bdevs_discovered": 1, 00:18:24.342 "num_base_bdevs_operational": 1, 00:18:24.342 "base_bdevs_list": [ 00:18:24.342 { 00:18:24.342 "name": null, 00:18:24.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.342 "is_configured": false, 00:18:24.342 "data_offset": 0, 00:18:24.342 "data_size": 7936 00:18:24.342 }, 00:18:24.342 { 00:18:24.342 "name": "BaseBdev2", 00:18:24.342 "uuid": "83a6034a-07ad-5298-99c7-886c1bd8cbd5", 00:18:24.342 "is_configured": true, 00:18:24.342 "data_offset": 256, 00:18:24.342 "data_size": 7936 00:18:24.342 } 00:18:24.342 ] 00:18:24.342 }' 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98221 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98221 ']' 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98221 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.342 
16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98221 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98221' 00:18:24.342 killing process with pid 98221 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98221 00:18:24.342 Received shutdown signal, test time was about 60.000000 seconds 00:18:24.342 00:18:24.342 Latency(us) 00:18:24.342 [2024-11-08T16:59:53.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.342 [2024-11-08T16:59:53.870Z] =================================================================================================================== 00:18:24.342 [2024-11-08T16:59:53.870Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.342 [2024-11-08 16:59:53.838100] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.342 16:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98221 00:18:24.342 [2024-11-08 16:59:53.838254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.342 [2024-11-08 16:59:53.838313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.342 [2024-11-08 16:59:53.838329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:18:24.602 [2024-11-08 16:59:53.872736] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:24.602 16:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:18:24.602 00:18:24.602 real 0m19.091s 00:18:24.602 user 0m25.306s 00:18:24.602 sys 0m2.847s 00:18:24.602 16:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.602 16:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.602 ************************************ 00:18:24.602 END TEST raid_rebuild_test_sb_md_separate 00:18:24.602 ************************************ 00:18:24.861 16:59:54 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:24.861 16:59:54 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:24.861 16:59:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:24.861 16:59:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.861 16:59:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.861 ************************************ 00:18:24.861 START TEST raid_state_function_test_sb_md_interleaved 00:18:24.861 ************************************ 00:18:24.861 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:24.861 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:24.862 16:59:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:24.862 Process raid pid: 98915 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98915 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98915' 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98915 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98915 ']' 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.862 16:59:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.862 [2024-11-08 16:59:54.285995] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:24.862 [2024-11-08 16:59:54.286129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.122 [2024-11-08 16:59:54.446070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.122 [2024-11-08 16:59:54.497481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.122 [2024-11-08 16:59:54.540059] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.122 [2024-11-08 16:59:54.540097] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.690 [2024-11-08 16:59:55.157732] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:25.690 [2024-11-08 16:59:55.157787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:25.690 [2024-11-08 16:59:55.157799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.690 [2024-11-08 16:59:55.157809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.690 16:59:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.690 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.690 16:59:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.950 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.950 "name": "Existed_Raid", 00:18:25.950 "uuid": "6ed1407a-f5fa-40eb-98ad-897c87ca4339", 00:18:25.950 "strip_size_kb": 0, 00:18:25.950 "state": "configuring", 00:18:25.950 "raid_level": "raid1", 00:18:25.950 "superblock": true, 00:18:25.950 "num_base_bdevs": 2, 00:18:25.950 "num_base_bdevs_discovered": 0, 00:18:25.950 "num_base_bdevs_operational": 2, 00:18:25.950 "base_bdevs_list": [ 00:18:25.950 { 00:18:25.950 "name": "BaseBdev1", 00:18:25.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.950 "is_configured": false, 00:18:25.950 "data_offset": 0, 00:18:25.950 "data_size": 0 00:18:25.950 }, 00:18:25.950 { 00:18:25.950 "name": "BaseBdev2", 00:18:25.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.950 "is_configured": false, 00:18:25.950 "data_offset": 0, 00:18:25.950 "data_size": 0 00:18:25.950 } 00:18:25.950 ] 00:18:25.950 }' 00:18:25.950 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.950 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.209 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:26.209 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.209 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.209 [2024-11-08 16:59:55.632804] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.209 [2024-11-08 16:59:55.632941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:18:26.209 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.210 [2024-11-08 16:59:55.644804] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.210 [2024-11-08 16:59:55.644886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.210 [2024-11-08 16:59:55.644915] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.210 [2024-11-08 16:59:55.644937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.210 [2024-11-08 16:59:55.665826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.210 BaseBdev1 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.210 [ 00:18:26.210 { 00:18:26.210 "name": "BaseBdev1", 00:18:26.210 "aliases": [ 00:18:26.210 "6bb82091-2bdd-4c4d-b474-770848435c11" 00:18:26.210 ], 00:18:26.210 "product_name": "Malloc disk", 00:18:26.210 "block_size": 4128, 00:18:26.210 "num_blocks": 8192, 00:18:26.210 "uuid": "6bb82091-2bdd-4c4d-b474-770848435c11", 00:18:26.210 "md_size": 32, 00:18:26.210 
"md_interleave": true, 00:18:26.210 "dif_type": 0, 00:18:26.210 "assigned_rate_limits": { 00:18:26.210 "rw_ios_per_sec": 0, 00:18:26.210 "rw_mbytes_per_sec": 0, 00:18:26.210 "r_mbytes_per_sec": 0, 00:18:26.210 "w_mbytes_per_sec": 0 00:18:26.210 }, 00:18:26.210 "claimed": true, 00:18:26.210 "claim_type": "exclusive_write", 00:18:26.210 "zoned": false, 00:18:26.210 "supported_io_types": { 00:18:26.210 "read": true, 00:18:26.210 "write": true, 00:18:26.210 "unmap": true, 00:18:26.210 "flush": true, 00:18:26.210 "reset": true, 00:18:26.210 "nvme_admin": false, 00:18:26.210 "nvme_io": false, 00:18:26.210 "nvme_io_md": false, 00:18:26.210 "write_zeroes": true, 00:18:26.210 "zcopy": true, 00:18:26.210 "get_zone_info": false, 00:18:26.210 "zone_management": false, 00:18:26.210 "zone_append": false, 00:18:26.210 "compare": false, 00:18:26.210 "compare_and_write": false, 00:18:26.210 "abort": true, 00:18:26.210 "seek_hole": false, 00:18:26.210 "seek_data": false, 00:18:26.210 "copy": true, 00:18:26.210 "nvme_iov_md": false 00:18:26.210 }, 00:18:26.210 "memory_domains": [ 00:18:26.210 { 00:18:26.210 "dma_device_id": "system", 00:18:26.210 "dma_device_type": 1 00:18:26.210 }, 00:18:26.210 { 00:18:26.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.210 "dma_device_type": 2 00:18:26.210 } 00:18:26.210 ], 00:18:26.210 "driver_specific": {} 00:18:26.210 } 00:18:26.210 ] 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.210 16:59:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.210 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.470 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.470 "name": "Existed_Raid", 00:18:26.470 "uuid": "e5d86a93-cd1a-42c0-8221-3e23f331ec4c", 00:18:26.470 "strip_size_kb": 0, 00:18:26.470 "state": "configuring", 00:18:26.470 "raid_level": "raid1", 
00:18:26.470 "superblock": true, 00:18:26.470 "num_base_bdevs": 2, 00:18:26.470 "num_base_bdevs_discovered": 1, 00:18:26.470 "num_base_bdevs_operational": 2, 00:18:26.470 "base_bdevs_list": [ 00:18:26.470 { 00:18:26.470 "name": "BaseBdev1", 00:18:26.470 "uuid": "6bb82091-2bdd-4c4d-b474-770848435c11", 00:18:26.470 "is_configured": true, 00:18:26.470 "data_offset": 256, 00:18:26.470 "data_size": 7936 00:18:26.470 }, 00:18:26.470 { 00:18:26.470 "name": "BaseBdev2", 00:18:26.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.470 "is_configured": false, 00:18:26.470 "data_offset": 0, 00:18:26.470 "data_size": 0 00:18:26.470 } 00:18:26.470 ] 00:18:26.470 }' 00:18:26.470 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.470 16:59:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.731 [2024-11-08 16:59:56.189044] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.731 [2024-11-08 16:59:56.189108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.731 [2024-11-08 16:59:56.201093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.731 [2024-11-08 16:59:56.202938] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.731 [2024-11-08 16:59:56.203032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.731 
16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.731 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.991 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.991 "name": "Existed_Raid", 00:18:26.991 "uuid": "089af7f4-e1c9-4e2b-95eb-19f67dd934b1", 00:18:26.991 "strip_size_kb": 0, 00:18:26.991 "state": "configuring", 00:18:26.991 "raid_level": "raid1", 00:18:26.991 "superblock": true, 00:18:26.991 "num_base_bdevs": 2, 00:18:26.991 "num_base_bdevs_discovered": 1, 00:18:26.991 "num_base_bdevs_operational": 2, 00:18:26.991 "base_bdevs_list": [ 00:18:26.991 { 00:18:26.991 "name": "BaseBdev1", 00:18:26.991 "uuid": "6bb82091-2bdd-4c4d-b474-770848435c11", 00:18:26.991 "is_configured": true, 00:18:26.991 "data_offset": 256, 00:18:26.991 "data_size": 7936 00:18:26.991 }, 00:18:26.991 { 00:18:26.991 "name": "BaseBdev2", 00:18:26.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.991 "is_configured": false, 00:18:26.991 "data_offset": 0, 00:18:26.991 "data_size": 0 00:18:26.991 } 00:18:26.991 ] 00:18:26.991 }' 00:18:26.991 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:26.991 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.251 [2024-11-08 16:59:56.672717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.251 [2024-11-08 16:59:56.673044] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:18:27.251 [2024-11-08 16:59:56.673120] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:27.251 [2024-11-08 16:59:56.673291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:27.251 [2024-11-08 16:59:56.673422] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:18:27.251 [2024-11-08 16:59:56.673479] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:18:27.251 [2024-11-08 16:59:56.673618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.251 BaseBdev2 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.251 [ 00:18:27.251 { 00:18:27.251 "name": "BaseBdev2", 00:18:27.251 "aliases": [ 00:18:27.251 "3e68cb67-4ee8-4bb5-a013-cf3648770018" 00:18:27.251 ], 00:18:27.251 "product_name": "Malloc disk", 00:18:27.251 "block_size": 4128, 00:18:27.251 "num_blocks": 8192, 00:18:27.251 "uuid": "3e68cb67-4ee8-4bb5-a013-cf3648770018", 00:18:27.251 "md_size": 32, 00:18:27.251 "md_interleave": true, 00:18:27.251 "dif_type": 0, 00:18:27.251 "assigned_rate_limits": { 00:18:27.251 "rw_ios_per_sec": 0, 00:18:27.251 "rw_mbytes_per_sec": 0, 00:18:27.251 "r_mbytes_per_sec": 0, 00:18:27.251 "w_mbytes_per_sec": 0 00:18:27.251 }, 00:18:27.251 "claimed": true, 00:18:27.251 "claim_type": "exclusive_write", 
00:18:27.251 "zoned": false, 00:18:27.251 "supported_io_types": { 00:18:27.251 "read": true, 00:18:27.251 "write": true, 00:18:27.251 "unmap": true, 00:18:27.251 "flush": true, 00:18:27.251 "reset": true, 00:18:27.251 "nvme_admin": false, 00:18:27.251 "nvme_io": false, 00:18:27.251 "nvme_io_md": false, 00:18:27.251 "write_zeroes": true, 00:18:27.251 "zcopy": true, 00:18:27.251 "get_zone_info": false, 00:18:27.251 "zone_management": false, 00:18:27.251 "zone_append": false, 00:18:27.251 "compare": false, 00:18:27.251 "compare_and_write": false, 00:18:27.251 "abort": true, 00:18:27.251 "seek_hole": false, 00:18:27.251 "seek_data": false, 00:18:27.251 "copy": true, 00:18:27.251 "nvme_iov_md": false 00:18:27.251 }, 00:18:27.251 "memory_domains": [ 00:18:27.251 { 00:18:27.251 "dma_device_id": "system", 00:18:27.251 "dma_device_type": 1 00:18:27.251 }, 00:18:27.251 { 00:18:27.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.251 "dma_device_type": 2 00:18:27.251 } 00:18:27.251 ], 00:18:27.251 "driver_specific": {} 00:18:27.251 } 00:18:27.251 ] 00:18:27.251 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.252 
16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.252 "name": "Existed_Raid", 00:18:27.252 "uuid": "089af7f4-e1c9-4e2b-95eb-19f67dd934b1", 00:18:27.252 "strip_size_kb": 0, 00:18:27.252 "state": "online", 00:18:27.252 "raid_level": "raid1", 00:18:27.252 "superblock": true, 00:18:27.252 "num_base_bdevs": 2, 00:18:27.252 "num_base_bdevs_discovered": 2, 00:18:27.252 
"num_base_bdevs_operational": 2, 00:18:27.252 "base_bdevs_list": [ 00:18:27.252 { 00:18:27.252 "name": "BaseBdev1", 00:18:27.252 "uuid": "6bb82091-2bdd-4c4d-b474-770848435c11", 00:18:27.252 "is_configured": true, 00:18:27.252 "data_offset": 256, 00:18:27.252 "data_size": 7936 00:18:27.252 }, 00:18:27.252 { 00:18:27.252 "name": "BaseBdev2", 00:18:27.252 "uuid": "3e68cb67-4ee8-4bb5-a013-cf3648770018", 00:18:27.252 "is_configured": true, 00:18:27.252 "data_offset": 256, 00:18:27.252 "data_size": 7936 00:18:27.252 } 00:18:27.252 ] 00:18:27.252 }' 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.252 16:59:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.836 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:27.836 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:27.836 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:27.836 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:27.836 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:27.836 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:27.836 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:27.836 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.836 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.836 16:59:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:27.836 [2024-11-08 16:59:57.136367] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.836 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.836 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:27.836 "name": "Existed_Raid", 00:18:27.836 "aliases": [ 00:18:27.836 "089af7f4-e1c9-4e2b-95eb-19f67dd934b1" 00:18:27.836 ], 00:18:27.836 "product_name": "Raid Volume", 00:18:27.836 "block_size": 4128, 00:18:27.836 "num_blocks": 7936, 00:18:27.836 "uuid": "089af7f4-e1c9-4e2b-95eb-19f67dd934b1", 00:18:27.836 "md_size": 32, 00:18:27.836 "md_interleave": true, 00:18:27.836 "dif_type": 0, 00:18:27.836 "assigned_rate_limits": { 00:18:27.836 "rw_ios_per_sec": 0, 00:18:27.836 "rw_mbytes_per_sec": 0, 00:18:27.836 "r_mbytes_per_sec": 0, 00:18:27.836 "w_mbytes_per_sec": 0 00:18:27.836 }, 00:18:27.836 "claimed": false, 00:18:27.836 "zoned": false, 00:18:27.836 "supported_io_types": { 00:18:27.836 "read": true, 00:18:27.836 "write": true, 00:18:27.837 "unmap": false, 00:18:27.837 "flush": false, 00:18:27.837 "reset": true, 00:18:27.837 "nvme_admin": false, 00:18:27.837 "nvme_io": false, 00:18:27.837 "nvme_io_md": false, 00:18:27.837 "write_zeroes": true, 00:18:27.837 "zcopy": false, 00:18:27.837 "get_zone_info": false, 00:18:27.837 "zone_management": false, 00:18:27.837 "zone_append": false, 00:18:27.837 "compare": false, 00:18:27.837 "compare_and_write": false, 00:18:27.837 "abort": false, 00:18:27.837 "seek_hole": false, 00:18:27.837 "seek_data": false, 00:18:27.837 "copy": false, 00:18:27.837 "nvme_iov_md": false 00:18:27.837 }, 00:18:27.837 "memory_domains": [ 00:18:27.837 { 00:18:27.837 "dma_device_id": "system", 00:18:27.837 "dma_device_type": 1 00:18:27.837 }, 00:18:27.837 { 00:18:27.837 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:27.837 "dma_device_type": 2 00:18:27.837 }, 00:18:27.837 { 00:18:27.837 "dma_device_id": "system", 00:18:27.837 "dma_device_type": 1 00:18:27.837 }, 00:18:27.837 { 00:18:27.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.837 "dma_device_type": 2 00:18:27.837 } 00:18:27.837 ], 00:18:27.837 "driver_specific": { 00:18:27.837 "raid": { 00:18:27.837 "uuid": "089af7f4-e1c9-4e2b-95eb-19f67dd934b1", 00:18:27.837 "strip_size_kb": 0, 00:18:27.837 "state": "online", 00:18:27.837 "raid_level": "raid1", 00:18:27.837 "superblock": true, 00:18:27.837 "num_base_bdevs": 2, 00:18:27.837 "num_base_bdevs_discovered": 2, 00:18:27.837 "num_base_bdevs_operational": 2, 00:18:27.837 "base_bdevs_list": [ 00:18:27.837 { 00:18:27.837 "name": "BaseBdev1", 00:18:27.837 "uuid": "6bb82091-2bdd-4c4d-b474-770848435c11", 00:18:27.837 "is_configured": true, 00:18:27.837 "data_offset": 256, 00:18:27.837 "data_size": 7936 00:18:27.837 }, 00:18:27.837 { 00:18:27.837 "name": "BaseBdev2", 00:18:27.837 "uuid": "3e68cb67-4ee8-4bb5-a013-cf3648770018", 00:18:27.837 "is_configured": true, 00:18:27.837 "data_offset": 256, 00:18:27.837 "data_size": 7936 00:18:27.837 } 00:18:27.837 ] 00:18:27.837 } 00:18:27.837 } 00:18:27.837 }' 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:27.837 BaseBdev2' 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.837 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:28.096 
16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.096 [2024-11-08 16:59:57.383681] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.096 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.097 16:59:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.097 "name": "Existed_Raid", 00:18:28.097 "uuid": "089af7f4-e1c9-4e2b-95eb-19f67dd934b1", 00:18:28.097 "strip_size_kb": 0, 00:18:28.097 "state": "online", 00:18:28.097 "raid_level": "raid1", 00:18:28.097 "superblock": true, 00:18:28.097 "num_base_bdevs": 2, 00:18:28.097 "num_base_bdevs_discovered": 1, 00:18:28.097 "num_base_bdevs_operational": 1, 00:18:28.097 "base_bdevs_list": [ 00:18:28.097 { 00:18:28.097 "name": null, 00:18:28.097 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:28.097 "is_configured": false, 00:18:28.097 "data_offset": 0, 00:18:28.097 "data_size": 7936 00:18:28.097 }, 00:18:28.097 { 00:18:28.097 "name": "BaseBdev2", 00:18:28.097 "uuid": "3e68cb67-4ee8-4bb5-a013-cf3648770018", 00:18:28.097 "is_configured": true, 00:18:28.097 "data_offset": 256, 00:18:28.097 "data_size": 7936 00:18:28.097 } 00:18:28.097 ] 00:18:28.097 }' 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.097 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:28.665 16:59:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.665 [2024-11-08 16:59:57.930884] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:28.665 [2024-11-08 16:59:57.931002] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.665 [2024-11-08 16:59:57.943541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.665 [2024-11-08 16:59:57.943592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.665 [2024-11-08 16:59:57.943614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:28.665 16:59:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98915 00:18:28.666 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98915 ']' 00:18:28.666 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98915 00:18:28.666 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:28.666 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.666 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98915 00:18:28.666 killing process with pid 98915 00:18:28.666 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:28.666 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:28.666 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98915' 00:18:28.666 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98915 00:18:28.666 [2024-11-08 16:59:58.043108] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.666 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98915 00:18:28.666 [2024-11-08 16:59:58.044155] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.924 
16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:28.924 00:18:28.924 real 0m4.103s 00:18:28.924 user 0m6.421s 00:18:28.924 sys 0m0.936s 00:18:28.924 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:28.924 16:59:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.925 ************************************ 00:18:28.925 END TEST raid_state_function_test_sb_md_interleaved 00:18:28.925 ************************************ 00:18:28.925 16:59:58 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:28.925 16:59:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:28.925 16:59:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:28.925 16:59:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.925 ************************************ 00:18:28.925 START TEST raid_superblock_test_md_interleaved 00:18:28.925 ************************************ 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99151 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:28.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99151 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99151 ']' 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:28.925 16:59:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.183 [2024-11-08 16:59:58.472256] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:29.183 [2024-11-08 16:59:58.472542] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99151 ] 00:18:29.183 [2024-11-08 16:59:58.641553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.183 [2024-11-08 16:59:58.690288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.442 [2024-11-08 16:59:58.732392] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.442 [2024-11-08 16:59:58.732517] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.010 malloc1 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.010 [2024-11-08 16:59:59.363221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:30.010 [2024-11-08 16:59:59.363312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.010 [2024-11-08 16:59:59.363354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:30.010 [2024-11-08 16:59:59.363368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.010 [2024-11-08 16:59:59.365503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.010 [2024-11-08 16:59:59.365549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:30.010 pt1 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:30.010 16:59:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.010 malloc2 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.010 [2024-11-08 16:59:59.407069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:30.010 [2024-11-08 16:59:59.407243] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.010 [2024-11-08 16:59:59.407285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:30.010 [2024-11-08 16:59:59.407331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.010 [2024-11-08 16:59:59.409573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.010 [2024-11-08 16:59:59.409674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.010 pt2 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.010 [2024-11-08 16:59:59.419068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:30.010 [2024-11-08 16:59:59.421048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.010 [2024-11-08 16:59:59.421255] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:18:30.010 [2024-11-08 16:59:59.421314] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:30.010 [2024-11-08 16:59:59.421431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:30.010 [2024-11-08 16:59:59.421552] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:18:30.010 [2024-11-08 16:59:59.421598] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:18:30.010 [2024-11-08 16:59:59.421730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.010 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.011 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.011 "name": "raid_bdev1", 00:18:30.011 "uuid": "bf68eff7-36ce-483a-84b0-7d75a21992bf", 00:18:30.011 "strip_size_kb": 0, 00:18:30.011 "state": "online", 00:18:30.011 "raid_level": "raid1", 00:18:30.011 "superblock": true, 00:18:30.011 "num_base_bdevs": 2, 00:18:30.011 "num_base_bdevs_discovered": 2, 00:18:30.011 "num_base_bdevs_operational": 2, 00:18:30.011 "base_bdevs_list": [ 00:18:30.011 { 00:18:30.011 "name": "pt1", 00:18:30.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.011 "is_configured": true, 00:18:30.011 "data_offset": 256, 00:18:30.011 "data_size": 7936 00:18:30.011 }, 00:18:30.011 { 00:18:30.011 "name": "pt2", 00:18:30.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.011 "is_configured": true, 00:18:30.011 "data_offset": 256, 00:18:30.011 "data_size": 7936 00:18:30.011 } 00:18:30.011 ] 00:18:30.011 }' 00:18:30.011 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.011 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.576 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:30.576 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:30.576 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:30.576 16:59:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:30.576 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:30.576 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:30.576 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:30.576 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.576 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.576 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:30.576 [2024-11-08 16:59:59.910741] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.576 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.576 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:30.576 "name": "raid_bdev1", 00:18:30.576 "aliases": [ 00:18:30.576 "bf68eff7-36ce-483a-84b0-7d75a21992bf" 00:18:30.576 ], 00:18:30.576 "product_name": "Raid Volume", 00:18:30.576 "block_size": 4128, 00:18:30.576 "num_blocks": 7936, 00:18:30.576 "uuid": "bf68eff7-36ce-483a-84b0-7d75a21992bf", 00:18:30.576 "md_size": 32, 00:18:30.576 "md_interleave": true, 00:18:30.576 "dif_type": 0, 00:18:30.576 "assigned_rate_limits": { 00:18:30.576 "rw_ios_per_sec": 0, 00:18:30.576 "rw_mbytes_per_sec": 0, 00:18:30.576 "r_mbytes_per_sec": 0, 00:18:30.576 "w_mbytes_per_sec": 0 00:18:30.576 }, 00:18:30.576 "claimed": false, 00:18:30.576 "zoned": false, 00:18:30.576 "supported_io_types": { 00:18:30.577 "read": true, 00:18:30.577 "write": true, 00:18:30.577 "unmap": false, 00:18:30.577 "flush": false, 00:18:30.577 "reset": true, 
00:18:30.577 "nvme_admin": false, 00:18:30.577 "nvme_io": false, 00:18:30.577 "nvme_io_md": false, 00:18:30.577 "write_zeroes": true, 00:18:30.577 "zcopy": false, 00:18:30.577 "get_zone_info": false, 00:18:30.577 "zone_management": false, 00:18:30.577 "zone_append": false, 00:18:30.577 "compare": false, 00:18:30.577 "compare_and_write": false, 00:18:30.577 "abort": false, 00:18:30.577 "seek_hole": false, 00:18:30.577 "seek_data": false, 00:18:30.577 "copy": false, 00:18:30.577 "nvme_iov_md": false 00:18:30.577 }, 00:18:30.577 "memory_domains": [ 00:18:30.577 { 00:18:30.577 "dma_device_id": "system", 00:18:30.577 "dma_device_type": 1 00:18:30.577 }, 00:18:30.577 { 00:18:30.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.577 "dma_device_type": 2 00:18:30.577 }, 00:18:30.577 { 00:18:30.577 "dma_device_id": "system", 00:18:30.577 "dma_device_type": 1 00:18:30.577 }, 00:18:30.577 { 00:18:30.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.577 "dma_device_type": 2 00:18:30.577 } 00:18:30.577 ], 00:18:30.577 "driver_specific": { 00:18:30.577 "raid": { 00:18:30.577 "uuid": "bf68eff7-36ce-483a-84b0-7d75a21992bf", 00:18:30.577 "strip_size_kb": 0, 00:18:30.577 "state": "online", 00:18:30.577 "raid_level": "raid1", 00:18:30.577 "superblock": true, 00:18:30.577 "num_base_bdevs": 2, 00:18:30.577 "num_base_bdevs_discovered": 2, 00:18:30.577 "num_base_bdevs_operational": 2, 00:18:30.577 "base_bdevs_list": [ 00:18:30.577 { 00:18:30.577 "name": "pt1", 00:18:30.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.577 "is_configured": true, 00:18:30.577 "data_offset": 256, 00:18:30.577 "data_size": 7936 00:18:30.577 }, 00:18:30.577 { 00:18:30.577 "name": "pt2", 00:18:30.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.577 "is_configured": true, 00:18:30.577 "data_offset": 256, 00:18:30.577 "data_size": 7936 00:18:30.577 } 00:18:30.577 ] 00:18:30.577 } 00:18:30.577 } 00:18:30.577 }' 00:18:30.577 16:59:59 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:30.577 pt2' 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.577 
17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.577 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.836 [2024-11-08 17:00:00.154234] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bf68eff7-36ce-483a-84b0-7d75a21992bf 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z bf68eff7-36ce-483a-84b0-7d75a21992bf ']' 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.836 17:00:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.836 [2024-11-08 17:00:00.197829] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.836 [2024-11-08 17:00:00.197875] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.836 [2024-11-08 17:00:00.197986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.836 [2024-11-08 17:00:00.198082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.836 [2024-11-08 17:00:00.198101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:30.836 17:00:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.836 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:30.837 
17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.837 [2024-11-08 17:00:00.337656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:30.837 [2024-11-08 17:00:00.340026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:30.837 [2024-11-08 17:00:00.340156] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:30.837 [2024-11-08 17:00:00.340266] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:30.837 [2024-11-08 17:00:00.340332] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.837 [2024-11-08 17:00:00.340388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:18:30.837 request: 
00:18:30.837 { 00:18:30.837 "name": "raid_bdev1", 00:18:30.837 "raid_level": "raid1", 00:18:30.837 "base_bdevs": [ 00:18:30.837 "malloc1", 00:18:30.837 "malloc2" 00:18:30.837 ], 00:18:30.837 "superblock": false, 00:18:30.837 "method": "bdev_raid_create", 00:18:30.837 "req_id": 1 00:18:30.837 } 00:18:30.837 Got JSON-RPC error response 00:18:30.837 response: 00:18:30.837 { 00:18:30.837 "code": -17, 00:18:30.837 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:30.837 } 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.837 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.096 [2024-11-08 17:00:00.401461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:31.096 [2024-11-08 17:00:00.401557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.096 [2024-11-08 17:00:00.401586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:31.096 [2024-11-08 17:00:00.401598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.096 [2024-11-08 17:00:00.403918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.096 [2024-11-08 17:00:00.403963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:31.096 [2024-11-08 17:00:00.404025] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:31.096 [2024-11-08 17:00:00.404089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:31.096 pt1 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.096 17:00:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.096 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.096 "name": "raid_bdev1", 00:18:31.096 "uuid": "bf68eff7-36ce-483a-84b0-7d75a21992bf", 00:18:31.096 "strip_size_kb": 0, 00:18:31.096 "state": "configuring", 00:18:31.096 "raid_level": "raid1", 00:18:31.096 "superblock": true, 00:18:31.096 "num_base_bdevs": 2, 00:18:31.096 "num_base_bdevs_discovered": 1, 00:18:31.096 "num_base_bdevs_operational": 2, 00:18:31.096 "base_bdevs_list": [ 00:18:31.096 { 00:18:31.096 "name": "pt1", 00:18:31.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.096 "is_configured": true, 00:18:31.096 
"data_offset": 256, 00:18:31.096 "data_size": 7936 00:18:31.096 }, 00:18:31.096 { 00:18:31.096 "name": null, 00:18:31.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.096 "is_configured": false, 00:18:31.096 "data_offset": 256, 00:18:31.096 "data_size": 7936 00:18:31.097 } 00:18:31.097 ] 00:18:31.097 }' 00:18:31.097 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.097 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.673 [2024-11-08 17:00:00.908651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:31.673 [2024-11-08 17:00:00.908827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.673 [2024-11-08 17:00:00.908894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:31.673 [2024-11-08 17:00:00.908934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.673 [2024-11-08 17:00:00.909142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.673 [2024-11-08 17:00:00.909193] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:18:31.673 [2024-11-08 17:00:00.909296] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:31.673 [2024-11-08 17:00:00.909357] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.673 [2024-11-08 17:00:00.909485] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:18:31.673 [2024-11-08 17:00:00.909529] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:31.673 [2024-11-08 17:00:00.909676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:31.673 [2024-11-08 17:00:00.909776] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:18:31.673 [2024-11-08 17:00:00.909825] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:18:31.673 [2024-11-08 17:00:00.909936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.673 pt2 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.673 17:00:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.673 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.673 "name": "raid_bdev1", 00:18:31.673 "uuid": "bf68eff7-36ce-483a-84b0-7d75a21992bf", 00:18:31.673 "strip_size_kb": 0, 00:18:31.673 "state": "online", 00:18:31.674 "raid_level": "raid1", 00:18:31.674 "superblock": true, 00:18:31.674 "num_base_bdevs": 2, 00:18:31.674 "num_base_bdevs_discovered": 2, 00:18:31.674 "num_base_bdevs_operational": 2, 00:18:31.674 "base_bdevs_list": [ 00:18:31.674 { 00:18:31.674 "name": "pt1", 00:18:31.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.674 "is_configured": true, 00:18:31.674 
"data_offset": 256, 00:18:31.674 "data_size": 7936 00:18:31.674 }, 00:18:31.674 { 00:18:31.674 "name": "pt2", 00:18:31.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.674 "is_configured": true, 00:18:31.674 "data_offset": 256, 00:18:31.674 "data_size": 7936 00:18:31.674 } 00:18:31.674 ] 00:18:31.674 }' 00:18:31.674 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.674 17:00:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.933 [2024-11-08 17:00:01.412160] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:31.933 "name": "raid_bdev1", 00:18:31.933 "aliases": [ 00:18:31.933 "bf68eff7-36ce-483a-84b0-7d75a21992bf" 00:18:31.933 ], 00:18:31.933 "product_name": "Raid Volume", 00:18:31.933 "block_size": 4128, 00:18:31.933 "num_blocks": 7936, 00:18:31.933 "uuid": "bf68eff7-36ce-483a-84b0-7d75a21992bf", 00:18:31.933 "md_size": 32, 00:18:31.933 "md_interleave": true, 00:18:31.933 "dif_type": 0, 00:18:31.933 "assigned_rate_limits": { 00:18:31.933 "rw_ios_per_sec": 0, 00:18:31.933 "rw_mbytes_per_sec": 0, 00:18:31.933 "r_mbytes_per_sec": 0, 00:18:31.933 "w_mbytes_per_sec": 0 00:18:31.933 }, 00:18:31.933 "claimed": false, 00:18:31.933 "zoned": false, 00:18:31.933 "supported_io_types": { 00:18:31.933 "read": true, 00:18:31.933 "write": true, 00:18:31.933 "unmap": false, 00:18:31.933 "flush": false, 00:18:31.933 "reset": true, 00:18:31.933 "nvme_admin": false, 00:18:31.933 "nvme_io": false, 00:18:31.933 "nvme_io_md": false, 00:18:31.933 "write_zeroes": true, 00:18:31.933 "zcopy": false, 00:18:31.933 "get_zone_info": false, 00:18:31.933 "zone_management": false, 00:18:31.933 "zone_append": false, 00:18:31.933 "compare": false, 00:18:31.933 "compare_and_write": false, 00:18:31.933 "abort": false, 00:18:31.933 "seek_hole": false, 00:18:31.933 "seek_data": false, 00:18:31.933 "copy": false, 00:18:31.933 "nvme_iov_md": false 00:18:31.933 }, 00:18:31.933 "memory_domains": [ 00:18:31.933 { 00:18:31.933 "dma_device_id": "system", 00:18:31.933 "dma_device_type": 1 00:18:31.933 }, 00:18:31.933 { 00:18:31.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.933 "dma_device_type": 2 00:18:31.933 }, 00:18:31.933 { 00:18:31.933 "dma_device_id": "system", 00:18:31.933 "dma_device_type": 1 00:18:31.933 }, 00:18:31.933 { 00:18:31.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.933 "dma_device_type": 2 00:18:31.933 } 00:18:31.933 ], 00:18:31.933 "driver_specific": { 
00:18:31.933 "raid": { 00:18:31.933 "uuid": "bf68eff7-36ce-483a-84b0-7d75a21992bf", 00:18:31.933 "strip_size_kb": 0, 00:18:31.933 "state": "online", 00:18:31.933 "raid_level": "raid1", 00:18:31.933 "superblock": true, 00:18:31.933 "num_base_bdevs": 2, 00:18:31.933 "num_base_bdevs_discovered": 2, 00:18:31.933 "num_base_bdevs_operational": 2, 00:18:31.933 "base_bdevs_list": [ 00:18:31.933 { 00:18:31.933 "name": "pt1", 00:18:31.933 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.933 "is_configured": true, 00:18:31.933 "data_offset": 256, 00:18:31.933 "data_size": 7936 00:18:31.933 }, 00:18:31.933 { 00:18:31.933 "name": "pt2", 00:18:31.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.933 "is_configured": true, 00:18:31.933 "data_offset": 256, 00:18:31.933 "data_size": 7936 00:18:31.933 } 00:18:31.933 ] 00:18:31.933 } 00:18:31.933 } 00:18:31.933 }' 00:18:31.933 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:32.193 pt2' 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.193 [2024-11-08 17:00:01.659891] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' bf68eff7-36ce-483a-84b0-7d75a21992bf '!=' bf68eff7-36ce-483a-84b0-7d75a21992bf ']' 00:18:32.193 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.194 [2024-11-08 17:00:01.707534] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.194 
17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.194 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.452 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.452 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.452 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.452 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.452 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.452 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.452 "name": "raid_bdev1", 00:18:32.452 "uuid": "bf68eff7-36ce-483a-84b0-7d75a21992bf", 00:18:32.452 "strip_size_kb": 0, 00:18:32.452 "state": "online", 00:18:32.452 "raid_level": "raid1", 00:18:32.452 "superblock": true, 00:18:32.452 "num_base_bdevs": 2, 00:18:32.452 "num_base_bdevs_discovered": 1, 00:18:32.452 "num_base_bdevs_operational": 1, 00:18:32.452 "base_bdevs_list": [ 00:18:32.452 { 00:18:32.452 "name": null, 00:18:32.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.452 "is_configured": false, 00:18:32.452 
"data_offset": 0, 00:18:32.452 "data_size": 7936 00:18:32.452 }, 00:18:32.452 { 00:18:32.452 "name": "pt2", 00:18:32.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.452 "is_configured": true, 00:18:32.452 "data_offset": 256, 00:18:32.452 "data_size": 7936 00:18:32.452 } 00:18:32.452 ] 00:18:32.452 }' 00:18:32.452 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.452 17:00:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.711 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:32.711 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.711 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.711 [2024-11-08 17:00:02.222735] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:32.711 [2024-11-08 17:00:02.222779] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.711 [2024-11-08 17:00:02.222881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.711 [2024-11-08 17:00:02.222942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.711 [2024-11-08 17:00:02.222954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:18:32.711 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.711 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:32.711 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.711 17:00:02 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.711 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.970 [2024-11-08 17:00:02.298587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:32.970 [2024-11-08 17:00:02.298768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.970 [2024-11-08 17:00:02.298816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:32.970 [2024-11-08 17:00:02.298872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.970 [2024-11-08 17:00:02.301257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.970 [2024-11-08 17:00:02.301350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:32.970 [2024-11-08 17:00:02.301457] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:32.970 [2024-11-08 17:00:02.301544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:32.970 [2024-11-08 17:00:02.301669] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:18:32.970 [2024-11-08 17:00:02.301764] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:32.970 [2024-11-08 17:00:02.301898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:32.970 [2024-11-08 17:00:02.301974] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:18:32.970 [2024-11-08 17:00:02.301986] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:18:32.970 [2024-11-08 17:00:02.302057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:32.970 pt2 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.970 "name": "raid_bdev1", 00:18:32.970 "uuid": "bf68eff7-36ce-483a-84b0-7d75a21992bf", 00:18:32.970 "strip_size_kb": 0, 00:18:32.970 "state": "online", 00:18:32.970 "raid_level": "raid1", 00:18:32.970 "superblock": true, 00:18:32.970 "num_base_bdevs": 2, 00:18:32.970 "num_base_bdevs_discovered": 1, 00:18:32.970 "num_base_bdevs_operational": 1, 00:18:32.970 "base_bdevs_list": [ 00:18:32.970 { 00:18:32.970 "name": null, 00:18:32.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.970 "is_configured": false, 00:18:32.970 "data_offset": 256, 00:18:32.970 "data_size": 7936 00:18:32.970 }, 00:18:32.970 { 00:18:32.970 "name": "pt2", 00:18:32.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.970 "is_configured": true, 00:18:32.970 "data_offset": 256, 00:18:32.970 "data_size": 7936 00:18:32.970 } 00:18:32.970 ] 00:18:32.970 }' 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.970 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.538 [2024-11-08 17:00:02.817755] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.538 [2024-11-08 17:00:02.817877] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.538 [2024-11-08 17:00:02.817996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.538 
[2024-11-08 17:00:02.818098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.538 [2024-11-08 17:00:02.818157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.538 [2024-11-08 17:00:02.881620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:33.538 [2024-11-08 17:00:02.881774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:18:33.538 [2024-11-08 17:00:02.881828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:33.538 [2024-11-08 17:00:02.881876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.538 [2024-11-08 17:00:02.884211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.538 [2024-11-08 17:00:02.884302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:33.538 [2024-11-08 17:00:02.884399] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:33.538 [2024-11-08 17:00:02.884486] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:33.538 [2024-11-08 17:00:02.884649] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:33.538 [2024-11-08 17:00:02.884719] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.538 [2024-11-08 17:00:02.884775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:18:33.538 [2024-11-08 17:00:02.884863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:33.538 [2024-11-08 17:00:02.884997] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:18:33.538 [2024-11-08 17:00:02.885049] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:33.538 [2024-11-08 17:00:02.885152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:33.538 [2024-11-08 17:00:02.885263] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:18:33.538 [2024-11-08 17:00:02.885310] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:18:33.538 [2024-11-08 
17:00:02.885442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.538 pt1 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.538 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.539 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.539 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.539 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.539 
17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.539 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.539 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.539 "name": "raid_bdev1", 00:18:33.539 "uuid": "bf68eff7-36ce-483a-84b0-7d75a21992bf", 00:18:33.539 "strip_size_kb": 0, 00:18:33.539 "state": "online", 00:18:33.539 "raid_level": "raid1", 00:18:33.539 "superblock": true, 00:18:33.539 "num_base_bdevs": 2, 00:18:33.539 "num_base_bdevs_discovered": 1, 00:18:33.539 "num_base_bdevs_operational": 1, 00:18:33.539 "base_bdevs_list": [ 00:18:33.539 { 00:18:33.539 "name": null, 00:18:33.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.539 "is_configured": false, 00:18:33.539 "data_offset": 256, 00:18:33.539 "data_size": 7936 00:18:33.539 }, 00:18:33.539 { 00:18:33.539 "name": "pt2", 00:18:33.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.539 "is_configured": true, 00:18:33.539 "data_offset": 256, 00:18:33.539 "data_size": 7936 00:18:33.539 } 00:18:33.539 ] 00:18:33.539 }' 00:18:33.539 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.539 17:00:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:34.106 [2024-11-08 17:00:03.449129] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' bf68eff7-36ce-483a-84b0-7d75a21992bf '!=' bf68eff7-36ce-483a-84b0-7d75a21992bf ']' 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99151 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99151 ']' 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99151 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99151 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:34.106 17:00:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99151' 00:18:34.106 killing process with pid 99151 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99151 00:18:34.106 [2024-11-08 17:00:03.518223] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:34.106 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99151 00:18:34.106 [2024-11-08 17:00:03.518412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.106 [2024-11-08 17:00:03.518472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.106 [2024-11-08 17:00:03.518483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:18:34.106 [2024-11-08 17:00:03.544000] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.365 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:34.365 00:18:34.365 real 0m5.452s 00:18:34.365 user 0m8.907s 00:18:34.365 sys 0m1.221s 00:18:34.365 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.365 17:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.365 ************************************ 00:18:34.365 END TEST raid_superblock_test_md_interleaved 00:18:34.365 ************************************ 00:18:34.365 17:00:03 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:34.365 17:00:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 
00:18:34.365 17:00:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.365 17:00:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:34.624 ************************************ 00:18:34.624 START TEST raid_rebuild_test_sb_md_interleaved 00:18:34.624 ************************************ 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=99468 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99468 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99468 ']' 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.624 17:00:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.624 17:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:34.624 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:34.624 Zero copy mechanism will not be used. 00:18:34.624 [2024-11-08 17:00:03.993957] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:34.624 [2024-11-08 17:00:03.994107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99468 ] 00:18:34.882 [2024-11-08 17:00:04.160536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.882 [2024-11-08 17:00:04.214654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.882 [2024-11-08 17:00:04.260459] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.882 [2024-11-08 17:00:04.260599] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:35.478 17:00:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.478 BaseBdev1_malloc 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.478 [2024-11-08 17:00:04.893036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:35.478 [2024-11-08 17:00:04.893217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.478 [2024-11-08 17:00:04.893254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:35.478 [2024-11-08 17:00:04.893265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.478 [2024-11-08 17:00:04.895347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.478 [2024-11-08 17:00:04.895389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:35.478 BaseBdev1 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.478 BaseBdev2_malloc 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.478 [2024-11-08 17:00:04.921971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:35.478 [2024-11-08 17:00:04.922053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.478 [2024-11-08 17:00:04.922080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:35.478 [2024-11-08 17:00:04.922093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.478 [2024-11-08 17:00:04.924492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.478 [2024-11-08 17:00:04.924537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:35.478 BaseBdev2 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd 
bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.478 spare_malloc 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.478 spare_delay 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.478 [2024-11-08 17:00:04.954982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:35.478 [2024-11-08 17:00:04.955061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.478 [2024-11-08 17:00:04.955089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:35.478 [2024-11-08 17:00:04.955099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.478 [2024-11-08 17:00:04.957273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.478 
[2024-11-08 17:00:04.957406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:35.478 spare 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.478 [2024-11-08 17:00:04.962999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.478 [2024-11-08 17:00:04.965084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.478 [2024-11-08 17:00:04.965283] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:18:35.478 [2024-11-08 17:00:04.965297] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:35.478 [2024-11-08 17:00:04.965405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:35.478 [2024-11-08 17:00:04.965486] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:18:35.478 [2024-11-08 17:00:04.965501] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:18:35.478 [2024-11-08 17:00:04.965579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:35.478 17:00:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.478 17:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.739 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.739 "name": "raid_bdev1", 00:18:35.739 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:35.739 "strip_size_kb": 0, 00:18:35.739 "state": "online", 00:18:35.739 
"raid_level": "raid1", 00:18:35.739 "superblock": true, 00:18:35.739 "num_base_bdevs": 2, 00:18:35.739 "num_base_bdevs_discovered": 2, 00:18:35.739 "num_base_bdevs_operational": 2, 00:18:35.739 "base_bdevs_list": [ 00:18:35.739 { 00:18:35.739 "name": "BaseBdev1", 00:18:35.739 "uuid": "d821aaa0-090d-5f5d-83d2-f7e66a7ba6e2", 00:18:35.739 "is_configured": true, 00:18:35.739 "data_offset": 256, 00:18:35.739 "data_size": 7936 00:18:35.739 }, 00:18:35.739 { 00:18:35.739 "name": "BaseBdev2", 00:18:35.739 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:35.739 "is_configured": true, 00:18:35.739 "data_offset": 256, 00:18:35.739 "data_size": 7936 00:18:35.739 } 00:18:35.739 ] 00:18:35.739 }' 00:18:35.739 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.739 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.999 [2024-11-08 17:00:05.390601] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq 
-r '.[].base_bdevs_list[0].data_offset' 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.999 [2024-11-08 17:00:05.490090] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.999 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.258 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.258 "name": "raid_bdev1", 00:18:36.258 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:36.258 "strip_size_kb": 0, 00:18:36.258 "state": "online", 00:18:36.258 "raid_level": "raid1", 00:18:36.258 "superblock": true, 00:18:36.258 "num_base_bdevs": 2, 00:18:36.258 "num_base_bdevs_discovered": 1, 00:18:36.258 "num_base_bdevs_operational": 1, 00:18:36.258 "base_bdevs_list": [ 00:18:36.258 { 00:18:36.258 "name": null, 00:18:36.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.258 "is_configured": false, 00:18:36.258 "data_offset": 0, 00:18:36.258 "data_size": 7936 00:18:36.259 }, 00:18:36.259 { 00:18:36.259 "name": 
"BaseBdev2", 00:18:36.259 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:36.259 "is_configured": true, 00:18:36.259 "data_offset": 256, 00:18:36.259 "data_size": 7936 00:18:36.259 } 00:18:36.259 ] 00:18:36.259 }' 00:18:36.259 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.259 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.517 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:36.517 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.517 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.517 [2024-11-08 17:00:05.921371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:36.517 [2024-11-08 17:00:05.924291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:36.517 [2024-11-08 17:00:05.926175] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:36.517 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.517 17:00:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:37.453 17:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.453 17:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.453 17:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.453 17:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.453 17:00:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.453 17:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.453 17:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.453 17:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.453 17:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.453 17:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.712 17:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.712 "name": "raid_bdev1", 00:18:37.712 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:37.712 "strip_size_kb": 0, 00:18:37.712 "state": "online", 00:18:37.712 "raid_level": "raid1", 00:18:37.712 "superblock": true, 00:18:37.712 "num_base_bdevs": 2, 00:18:37.712 "num_base_bdevs_discovered": 2, 00:18:37.712 "num_base_bdevs_operational": 2, 00:18:37.712 "process": { 00:18:37.712 "type": "rebuild", 00:18:37.712 "target": "spare", 00:18:37.712 "progress": { 00:18:37.712 "blocks": 2560, 00:18:37.712 "percent": 32 00:18:37.712 } 00:18:37.712 }, 00:18:37.712 "base_bdevs_list": [ 00:18:37.712 { 00:18:37.712 "name": "spare", 00:18:37.712 "uuid": "07ab4df8-d684-5f80-bcf2-89296da54599", 00:18:37.712 "is_configured": true, 00:18:37.712 "data_offset": 256, 00:18:37.712 "data_size": 7936 00:18:37.712 }, 00:18:37.712 { 00:18:37.712 "name": "BaseBdev2", 00:18:37.712 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:37.712 "is_configured": true, 00:18:37.712 "data_offset": 256, 00:18:37.712 "data_size": 7936 00:18:37.712 } 00:18:37.712 ] 00:18:37.712 }' 00:18:37.712 17:00:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.712 [2024-11-08 17:00:07.077425] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.712 [2024-11-08 17:00:07.132469] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:37.712 [2024-11-08 17:00:07.132670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.712 [2024-11-08 17:00:07.132716] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.712 [2024-11-08 17:00:07.132740] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.712 17:00:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.712 "name": "raid_bdev1", 00:18:37.712 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:37.712 "strip_size_kb": 0, 00:18:37.712 "state": "online", 00:18:37.712 "raid_level": "raid1", 00:18:37.712 "superblock": true, 00:18:37.712 "num_base_bdevs": 2, 00:18:37.712 "num_base_bdevs_discovered": 1, 00:18:37.712 "num_base_bdevs_operational": 1, 00:18:37.712 "base_bdevs_list": [ 00:18:37.712 { 00:18:37.712 "name": null, 
00:18:37.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.712 "is_configured": false, 00:18:37.712 "data_offset": 0, 00:18:37.712 "data_size": 7936 00:18:37.712 }, 00:18:37.712 { 00:18:37.712 "name": "BaseBdev2", 00:18:37.712 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:37.712 "is_configured": true, 00:18:37.712 "data_offset": 256, 00:18:37.712 "data_size": 7936 00:18:37.712 } 00:18:37.712 ] 00:18:37.712 }' 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.712 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.280 "name": "raid_bdev1", 00:18:38.280 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:38.280 "strip_size_kb": 0, 00:18:38.280 "state": "online", 00:18:38.280 "raid_level": "raid1", 00:18:38.280 "superblock": true, 00:18:38.280 "num_base_bdevs": 2, 00:18:38.280 "num_base_bdevs_discovered": 1, 00:18:38.280 "num_base_bdevs_operational": 1, 00:18:38.280 "base_bdevs_list": [ 00:18:38.280 { 00:18:38.280 "name": null, 00:18:38.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.280 "is_configured": false, 00:18:38.280 "data_offset": 0, 00:18:38.280 "data_size": 7936 00:18:38.280 }, 00:18:38.280 { 00:18:38.280 "name": "BaseBdev2", 00:18:38.280 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:38.280 "is_configured": true, 00:18:38.280 "data_offset": 256, 00:18:38.280 "data_size": 7936 00:18:38.280 } 00:18:38.280 ] 00:18:38.280 }' 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.280 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.280 [2024-11-08 17:00:07.763879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.281 [2024-11-08 17:00:07.767371] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:38.281 [2024-11-08 17:00:07.769576] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:38.281 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.281 17:00:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:39.660 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.660 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.660 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.661 "name": "raid_bdev1", 00:18:39.661 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:39.661 "strip_size_kb": 0, 00:18:39.661 "state": "online", 00:18:39.661 "raid_level": "raid1", 00:18:39.661 
"superblock": true, 00:18:39.661 "num_base_bdevs": 2, 00:18:39.661 "num_base_bdevs_discovered": 2, 00:18:39.661 "num_base_bdevs_operational": 2, 00:18:39.661 "process": { 00:18:39.661 "type": "rebuild", 00:18:39.661 "target": "spare", 00:18:39.661 "progress": { 00:18:39.661 "blocks": 2560, 00:18:39.661 "percent": 32 00:18:39.661 } 00:18:39.661 }, 00:18:39.661 "base_bdevs_list": [ 00:18:39.661 { 00:18:39.661 "name": "spare", 00:18:39.661 "uuid": "07ab4df8-d684-5f80-bcf2-89296da54599", 00:18:39.661 "is_configured": true, 00:18:39.661 "data_offset": 256, 00:18:39.661 "data_size": 7936 00:18:39.661 }, 00:18:39.661 { 00:18:39.661 "name": "BaseBdev2", 00:18:39.661 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:39.661 "is_configured": true, 00:18:39.661 "data_offset": 256, 00:18:39.661 "data_size": 7936 00:18:39.661 } 00:18:39.661 ] 00:18:39.661 }' 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:39.661 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:39.661 17:00:08 
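The `line 666: [: =: unary operator expected` failure captured above is a classic unquoted-variable bug: when the variable in `'[' <var> = false ']'` expands to nothing, `[` sees only `= false` and cannot parse it. A minimal reproduction of the failure and the usual quoting fix (the variable name `flag` is illustrative, not the one used in bdev_raid.sh):

```shell
flag=""   # empty expansion, as in the log's failing test at bdev_raid.sh line 666

# Unquoted: expands to `[ = false ]`, which fails with "unary operator expected"
[ $flag = false ] 2>/dev/null || echo "test errored, as in the log"

# Quoted: expands to `[ "" = false ]`, a valid binary comparison that is simply false
if [ "$flag" = false ]; then
  echo "flag is false"
else
  echo "flag is empty or something else"
fi
```

Note the harness tolerates this: the errored `[` returns non-zero, so the script falls through to the default rebuild-verification branch rather than aborting.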
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=633 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.661 "name": "raid_bdev1", 00:18:39.661 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:39.661 "strip_size_kb": 0, 00:18:39.661 "state": "online", 00:18:39.661 "raid_level": "raid1", 00:18:39.661 "superblock": true, 00:18:39.661 "num_base_bdevs": 2, 00:18:39.661 
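The `bdev_raid.sh@706-711` lines above implement a poll-with-deadline loop using bash's builtin `SECONDS` counter (`local timeout=633`, then `(( SECONDS < timeout ))` with a `sleep 1` between probes). A generic sketch of that pattern, with a placeholder condition standing in for the real `jq` check against `bdev_raid_get_bdevs`:

```shell
#!/usr/bin/env bash
# Deadline is relative to script start, since SECONDS counts from shell startup.
timeout=$((SECONDS + 5))

rebuild_done() {
  # Placeholder: the real test inspects .process.type from bdev_raid_get_bdevs here.
  [ -e /tmp/rebuild.done ]
}

touch /tmp/rebuild.done   # simulate immediate completion so the sketch terminates

while ((SECONDS < timeout)); do
  rebuild_done && break
  sleep 1
done

rebuild_done && echo "rebuild finished before the deadline"
rm -f /tmp/rebuild.done
```

The deadline in the log (633) is large because the same helper is shared by slower rebuild tests; the loop simply exits early once the `jq` probes stop reporting a `rebuild` process.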
"num_base_bdevs_discovered": 2, 00:18:39.661 "num_base_bdevs_operational": 2, 00:18:39.661 "process": { 00:18:39.661 "type": "rebuild", 00:18:39.661 "target": "spare", 00:18:39.661 "progress": { 00:18:39.661 "blocks": 2816, 00:18:39.661 "percent": 35 00:18:39.661 } 00:18:39.661 }, 00:18:39.661 "base_bdevs_list": [ 00:18:39.661 { 00:18:39.661 "name": "spare", 00:18:39.661 "uuid": "07ab4df8-d684-5f80-bcf2-89296da54599", 00:18:39.661 "is_configured": true, 00:18:39.661 "data_offset": 256, 00:18:39.661 "data_size": 7936 00:18:39.661 }, 00:18:39.661 { 00:18:39.661 "name": "BaseBdev2", 00:18:39.661 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:39.661 "is_configured": true, 00:18:39.661 "data_offset": 256, 00:18:39.661 "data_size": 7936 00:18:39.661 } 00:18:39.661 ] 00:18:39.661 }' 00:18:39.661 17:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.661 17:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.661 17:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.661 17:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.661 17:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:40.598 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:40.598 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.598 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.598 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.598 17:00:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.598 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.598 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.598 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.598 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.598 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.598 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.856 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.856 "name": "raid_bdev1", 00:18:40.856 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:40.856 "strip_size_kb": 0, 00:18:40.856 "state": "online", 00:18:40.856 "raid_level": "raid1", 00:18:40.856 "superblock": true, 00:18:40.856 "num_base_bdevs": 2, 00:18:40.856 "num_base_bdevs_discovered": 2, 00:18:40.856 "num_base_bdevs_operational": 2, 00:18:40.856 "process": { 00:18:40.856 "type": "rebuild", 00:18:40.856 "target": "spare", 00:18:40.856 "progress": { 00:18:40.856 "blocks": 5888, 00:18:40.856 "percent": 74 00:18:40.856 } 00:18:40.856 }, 00:18:40.856 "base_bdevs_list": [ 00:18:40.856 { 00:18:40.856 "name": "spare", 00:18:40.856 "uuid": "07ab4df8-d684-5f80-bcf2-89296da54599", 00:18:40.856 "is_configured": true, 00:18:40.856 "data_offset": 256, 00:18:40.856 "data_size": 7936 00:18:40.856 }, 00:18:40.856 { 00:18:40.856 "name": "BaseBdev2", 00:18:40.856 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:40.856 "is_configured": true, 00:18:40.856 "data_offset": 256, 00:18:40.856 "data_size": 7936 00:18:40.856 } 
00:18:40.856 ] 00:18:40.856 }' 00:18:40.856 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.856 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.856 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.856 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.856 17:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.424 [2024-11-08 17:00:10.884051] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:41.424 [2024-11-08 17:00:10.884289] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:41.424 [2024-11-08 17:00:10.884480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.994 "name": "raid_bdev1", 00:18:41.994 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:41.994 "strip_size_kb": 0, 00:18:41.994 "state": "online", 00:18:41.994 "raid_level": "raid1", 00:18:41.994 "superblock": true, 00:18:41.994 "num_base_bdevs": 2, 00:18:41.994 "num_base_bdevs_discovered": 2, 00:18:41.994 "num_base_bdevs_operational": 2, 00:18:41.994 "base_bdevs_list": [ 00:18:41.994 { 00:18:41.994 "name": "spare", 00:18:41.994 "uuid": "07ab4df8-d684-5f80-bcf2-89296da54599", 00:18:41.994 "is_configured": true, 00:18:41.994 "data_offset": 256, 00:18:41.994 "data_size": 7936 00:18:41.994 }, 00:18:41.994 { 00:18:41.994 "name": "BaseBdev2", 00:18:41.994 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:41.994 "is_configured": true, 00:18:41.994 "data_offset": 256, 00:18:41.994 "data_size": 7936 00:18:41.994 } 00:18:41.994 ] 00:18:41.994 }' 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@709 -- # break 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.994 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.994 "name": "raid_bdev1", 00:18:41.994 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:41.994 "strip_size_kb": 0, 00:18:41.994 "state": "online", 00:18:41.994 "raid_level": "raid1", 00:18:41.994 "superblock": true, 00:18:41.994 "num_base_bdevs": 2, 00:18:41.994 "num_base_bdevs_discovered": 2, 00:18:41.994 "num_base_bdevs_operational": 2, 00:18:41.994 "base_bdevs_list": [ 00:18:41.994 { 00:18:41.994 "name": "spare", 00:18:41.994 "uuid": "07ab4df8-d684-5f80-bcf2-89296da54599", 00:18:41.994 "is_configured": true, 00:18:41.994 "data_offset": 256, 00:18:41.994 "data_size": 7936 
00:18:41.994 }, 00:18:41.994 { 00:18:41.994 "name": "BaseBdev2", 00:18:41.994 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:41.994 "is_configured": true, 00:18:41.995 "data_offset": 256, 00:18:41.995 "data_size": 7936 00:18:41.995 } 00:18:41.995 ] 00:18:41.995 }' 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.995 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.253 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.253 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.253 "name": "raid_bdev1", 00:18:42.253 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:42.253 "strip_size_kb": 0, 00:18:42.253 "state": "online", 00:18:42.253 "raid_level": "raid1", 00:18:42.253 "superblock": true, 00:18:42.253 "num_base_bdevs": 2, 00:18:42.253 "num_base_bdevs_discovered": 2, 00:18:42.253 "num_base_bdevs_operational": 2, 00:18:42.254 "base_bdevs_list": [ 00:18:42.254 { 00:18:42.254 "name": "spare", 00:18:42.254 "uuid": "07ab4df8-d684-5f80-bcf2-89296da54599", 00:18:42.254 "is_configured": true, 00:18:42.254 "data_offset": 256, 00:18:42.254 "data_size": 7936 00:18:42.254 }, 00:18:42.254 { 00:18:42.254 "name": "BaseBdev2", 00:18:42.254 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:42.254 "is_configured": true, 00:18:42.254 "data_offset": 256, 00:18:42.254 "data_size": 7936 00:18:42.254 } 00:18:42.254 ] 00:18:42.254 }' 00:18:42.254 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.254 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.512 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:18:42.512 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.512 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.512 [2024-11-08 17:00:11.982866] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:42.512 [2024-11-08 17:00:11.982990] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:42.512 [2024-11-08 17:00:11.983122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:42.512 [2024-11-08 17:00:11.983245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:42.512 [2024-11-08 17:00:11.983337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:18:42.512 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.512 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.512 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:42.512 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.512 17:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.512 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:42.772 17:00:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.772 [2024-11-08 17:00:12.062736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:42.772 [2024-11-08 17:00:12.062820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.772 [2024-11-08 17:00:12.062842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:42.772 [2024-11-08 17:00:12.062853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.772 [2024-11-08 17:00:12.064909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.772 [2024-11-08 17:00:12.064955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:42.772 [2024-11-08 17:00:12.065024] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:42.772 [2024-11-08 17:00:12.065072] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.772 [2024-11-08 17:00:12.065182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:42.772 spare 00:18:42.772 17:00:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.772 [2024-11-08 17:00:12.165087] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:18:42.772 [2024-11-08 17:00:12.165131] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:42.772 [2024-11-08 17:00:12.165286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:42.772 [2024-11-08 17:00:12.165402] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:18:42.772 [2024-11-08 17:00:12.165422] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:18:42.772 [2024-11-08 17:00:12.165519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.772 "name": "raid_bdev1", 00:18:42.772 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:42.772 "strip_size_kb": 0, 00:18:42.772 "state": "online", 00:18:42.772 "raid_level": "raid1", 00:18:42.772 "superblock": true, 00:18:42.772 "num_base_bdevs": 2, 00:18:42.772 "num_base_bdevs_discovered": 2, 00:18:42.772 "num_base_bdevs_operational": 2, 00:18:42.772 "base_bdevs_list": [ 00:18:42.772 { 00:18:42.772 "name": "spare", 00:18:42.772 "uuid": "07ab4df8-d684-5f80-bcf2-89296da54599", 00:18:42.772 "is_configured": true, 00:18:42.772 "data_offset": 256, 00:18:42.772 "data_size": 7936 00:18:42.772 }, 00:18:42.772 { 00:18:42.772 "name": 
"BaseBdev2", 00:18:42.772 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:42.772 "is_configured": true, 00:18:42.772 "data_offset": 256, 00:18:42.772 "data_size": 7936 00:18:42.772 } 00:18:42.772 ] 00:18:42.772 }' 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.772 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.032 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.032 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.032 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.032 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.032 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.032 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.032 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.032 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.032 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.032 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.290 "name": "raid_bdev1", 00:18:43.290 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:43.290 "strip_size_kb": 0, 00:18:43.290 "state": "online", 00:18:43.290 
"raid_level": "raid1", 00:18:43.290 "superblock": true, 00:18:43.290 "num_base_bdevs": 2, 00:18:43.290 "num_base_bdevs_discovered": 2, 00:18:43.290 "num_base_bdevs_operational": 2, 00:18:43.290 "base_bdevs_list": [ 00:18:43.290 { 00:18:43.290 "name": "spare", 00:18:43.290 "uuid": "07ab4df8-d684-5f80-bcf2-89296da54599", 00:18:43.290 "is_configured": true, 00:18:43.290 "data_offset": 256, 00:18:43.290 "data_size": 7936 00:18:43.290 }, 00:18:43.290 { 00:18:43.290 "name": "BaseBdev2", 00:18:43.290 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:43.290 "is_configured": true, 00:18:43.290 "data_offset": 256, 00:18:43.290 "data_size": 7936 00:18:43.290 } 00:18:43.290 ] 00:18:43.290 }' 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.290 17:00:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.290 [2024-11-08 17:00:12.697706] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.290 17:00:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.290 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.290 "name": "raid_bdev1", 00:18:43.290 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:43.290 "strip_size_kb": 0, 00:18:43.290 "state": "online", 00:18:43.290 "raid_level": "raid1", 00:18:43.290 "superblock": true, 00:18:43.290 "num_base_bdevs": 2, 00:18:43.290 "num_base_bdevs_discovered": 1, 00:18:43.290 "num_base_bdevs_operational": 1, 00:18:43.290 "base_bdevs_list": [ 00:18:43.290 { 00:18:43.290 "name": null, 00:18:43.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.290 "is_configured": false, 00:18:43.291 "data_offset": 0, 00:18:43.291 "data_size": 7936 00:18:43.291 }, 00:18:43.291 { 00:18:43.291 "name": "BaseBdev2", 00:18:43.291 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:43.291 "is_configured": true, 00:18:43.291 "data_offset": 256, 00:18:43.291 "data_size": 7936 00:18:43.291 } 00:18:43.291 ] 00:18:43.291 }' 00:18:43.291 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.291 17:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.856 17:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:43.856 17:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.856 17:00:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.856 [2024-11-08 17:00:13.085085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.856 [2024-11-08 17:00:13.085301] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:43.856 [2024-11-08 17:00:13.085319] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:43.856 [2024-11-08 17:00:13.085362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.856 [2024-11-08 17:00:13.088586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:43.856 [2024-11-08 17:00:13.090947] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.856 17:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.856 17:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.792 "name": "raid_bdev1", 00:18:44.792 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:44.792 "strip_size_kb": 0, 00:18:44.792 "state": "online", 00:18:44.792 "raid_level": "raid1", 00:18:44.792 "superblock": true, 00:18:44.792 "num_base_bdevs": 2, 00:18:44.792 "num_base_bdevs_discovered": 2, 00:18:44.792 "num_base_bdevs_operational": 2, 00:18:44.792 "process": { 00:18:44.792 "type": "rebuild", 00:18:44.792 "target": "spare", 00:18:44.792 "progress": { 00:18:44.792 "blocks": 2560, 00:18:44.792 "percent": 32 00:18:44.792 } 00:18:44.792 }, 00:18:44.792 "base_bdevs_list": [ 00:18:44.792 { 00:18:44.792 "name": "spare", 00:18:44.792 "uuid": "07ab4df8-d684-5f80-bcf2-89296da54599", 00:18:44.792 "is_configured": true, 00:18:44.792 "data_offset": 256, 00:18:44.792 "data_size": 7936 00:18:44.792 }, 00:18:44.792 { 00:18:44.792 "name": "BaseBdev2", 00:18:44.792 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:44.792 "is_configured": true, 00:18:44.792 "data_offset": 256, 00:18:44.792 "data_size": 7936 00:18:44.792 } 00:18:44.792 ] 00:18:44.792 }' 00:18:44.792 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.793 [2024-11-08 17:00:14.254447] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.793 [2024-11-08 17:00:14.296622] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:44.793 [2024-11-08 17:00:14.296716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.793 [2024-11-08 17:00:14.296735] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.793 [2024-11-08 17:00:14.296744] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.793 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.052 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.052 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.052 "name": "raid_bdev1", 00:18:45.052 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:45.052 "strip_size_kb": 0, 00:18:45.052 "state": "online", 00:18:45.052 "raid_level": "raid1", 00:18:45.052 "superblock": true, 00:18:45.052 "num_base_bdevs": 2, 00:18:45.052 "num_base_bdevs_discovered": 1, 00:18:45.052 "num_base_bdevs_operational": 1, 00:18:45.052 "base_bdevs_list": [ 00:18:45.052 { 00:18:45.052 "name": null, 00:18:45.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.052 "is_configured": false, 00:18:45.052 "data_offset": 0, 00:18:45.052 "data_size": 7936 00:18:45.052 }, 00:18:45.052 { 00:18:45.052 "name": "BaseBdev2", 00:18:45.052 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:45.052 "is_configured": true, 
00:18:45.052 "data_offset": 256, 00:18:45.052 "data_size": 7936 00:18:45.052 } 00:18:45.052 ] 00:18:45.052 }' 00:18:45.052 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.052 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.312 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:45.312 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.312 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.312 [2024-11-08 17:00:14.711842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:45.312 [2024-11-08 17:00:14.712003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.312 [2024-11-08 17:00:14.712052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:45.312 [2024-11-08 17:00:14.712103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.312 [2024-11-08 17:00:14.712348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.312 [2024-11-08 17:00:14.712398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:45.312 [2024-11-08 17:00:14.712488] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:45.312 [2024-11-08 17:00:14.712525] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:45.312 [2024-11-08 17:00:14.712569] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:45.312 [2024-11-08 17:00:14.712625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.312 [2024-11-08 17:00:14.715538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:45.312 [2024-11-08 17:00:14.717508] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:45.312 spare 00:18:45.312 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.312 17:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:46.251 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.251 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.251 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.251 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.251 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.251 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.251 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.251 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.251 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.251 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:46.515 "name": "raid_bdev1", 00:18:46.515 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:46.515 "strip_size_kb": 0, 00:18:46.515 "state": "online", 00:18:46.515 "raid_level": "raid1", 00:18:46.515 "superblock": true, 00:18:46.515 "num_base_bdevs": 2, 00:18:46.515 "num_base_bdevs_discovered": 2, 00:18:46.515 "num_base_bdevs_operational": 2, 00:18:46.515 "process": { 00:18:46.515 "type": "rebuild", 00:18:46.515 "target": "spare", 00:18:46.515 "progress": { 00:18:46.515 "blocks": 2560, 00:18:46.515 "percent": 32 00:18:46.515 } 00:18:46.515 }, 00:18:46.515 "base_bdevs_list": [ 00:18:46.515 { 00:18:46.515 "name": "spare", 00:18:46.515 "uuid": "07ab4df8-d684-5f80-bcf2-89296da54599", 00:18:46.515 "is_configured": true, 00:18:46.515 "data_offset": 256, 00:18:46.515 "data_size": 7936 00:18:46.515 }, 00:18:46.515 { 00:18:46.515 "name": "BaseBdev2", 00:18:46.515 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:46.515 "is_configured": true, 00:18:46.515 "data_offset": 256, 00:18:46.515 "data_size": 7936 00:18:46.515 } 00:18:46.515 ] 00:18:46.515 }' 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.515 [2024-11-08 
17:00:15.884658] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.515 [2024-11-08 17:00:15.922638] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:46.515 [2024-11-08 17:00:15.922729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.515 [2024-11-08 17:00:15.922745] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.515 [2024-11-08 17:00:15.922756] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.515 17:00:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.515 "name": "raid_bdev1", 00:18:46.515 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:46.515 "strip_size_kb": 0, 00:18:46.515 "state": "online", 00:18:46.515 "raid_level": "raid1", 00:18:46.515 "superblock": true, 00:18:46.515 "num_base_bdevs": 2, 00:18:46.515 "num_base_bdevs_discovered": 1, 00:18:46.515 "num_base_bdevs_operational": 1, 00:18:46.515 "base_bdevs_list": [ 00:18:46.515 { 00:18:46.515 "name": null, 00:18:46.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.515 "is_configured": false, 00:18:46.515 "data_offset": 0, 00:18:46.515 "data_size": 7936 00:18:46.515 }, 00:18:46.515 { 00:18:46.515 "name": "BaseBdev2", 00:18:46.515 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:46.515 "is_configured": true, 00:18:46.515 "data_offset": 256, 00:18:46.515 "data_size": 7936 00:18:46.515 } 00:18:46.515 ] 00:18:46.515 }' 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.515 17:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.095 17:00:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.095 "name": "raid_bdev1", 00:18:47.095 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:47.095 "strip_size_kb": 0, 00:18:47.095 "state": "online", 00:18:47.095 "raid_level": "raid1", 00:18:47.095 "superblock": true, 00:18:47.095 "num_base_bdevs": 2, 00:18:47.095 "num_base_bdevs_discovered": 1, 00:18:47.095 "num_base_bdevs_operational": 1, 00:18:47.095 "base_bdevs_list": [ 00:18:47.095 { 00:18:47.095 "name": null, 00:18:47.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.095 "is_configured": false, 00:18:47.095 "data_offset": 0, 00:18:47.095 "data_size": 7936 00:18:47.095 }, 00:18:47.095 { 00:18:47.095 "name": "BaseBdev2", 00:18:47.095 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:47.095 "is_configured": true, 00:18:47.095 "data_offset": 256, 
00:18:47.095 "data_size": 7936 00:18:47.095 } 00:18:47.095 ] 00:18:47.095 }' 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.095 [2024-11-08 17:00:16.581491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:47.095 [2024-11-08 17:00:16.581576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.095 [2024-11-08 17:00:16.581597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:47.095 [2024-11-08 17:00:16.581608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.095 [2024-11-08 17:00:16.581789] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.095 [2024-11-08 17:00:16.581806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:47.095 [2024-11-08 17:00:16.581854] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:47.095 [2024-11-08 17:00:16.581882] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:47.095 [2024-11-08 17:00:16.581890] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:47.095 [2024-11-08 17:00:16.581905] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:47.095 BaseBdev1 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.095 17:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.472 17:00:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.472 "name": "raid_bdev1", 00:18:48.472 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:48.472 "strip_size_kb": 0, 00:18:48.472 "state": "online", 00:18:48.472 "raid_level": "raid1", 00:18:48.472 "superblock": true, 00:18:48.472 "num_base_bdevs": 2, 00:18:48.472 "num_base_bdevs_discovered": 1, 00:18:48.472 "num_base_bdevs_operational": 1, 00:18:48.472 "base_bdevs_list": [ 00:18:48.472 { 00:18:48.472 "name": null, 00:18:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.472 "is_configured": false, 00:18:48.472 "data_offset": 0, 00:18:48.472 "data_size": 7936 00:18:48.472 }, 00:18:48.472 { 00:18:48.472 "name": "BaseBdev2", 00:18:48.472 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:48.472 "is_configured": true, 00:18:48.472 "data_offset": 256, 00:18:48.472 "data_size": 7936 00:18:48.472 } 00:18:48.472 ] 00:18:48.472 }' 00:18:48.472 17:00:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.472 17:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.731 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:48.731 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.731 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:48.731 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:48.731 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.731 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.731 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.731 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.731 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.732 "name": "raid_bdev1", 00:18:48.732 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:48.732 "strip_size_kb": 0, 00:18:48.732 "state": "online", 00:18:48.732 "raid_level": "raid1", 00:18:48.732 "superblock": true, 00:18:48.732 "num_base_bdevs": 2, 00:18:48.732 "num_base_bdevs_discovered": 1, 00:18:48.732 "num_base_bdevs_operational": 1, 00:18:48.732 "base_bdevs_list": [ 00:18:48.732 { 00:18:48.732 "name": 
null, 00:18:48.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.732 "is_configured": false, 00:18:48.732 "data_offset": 0, 00:18:48.732 "data_size": 7936 00:18:48.732 }, 00:18:48.732 { 00:18:48.732 "name": "BaseBdev2", 00:18:48.732 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:48.732 "is_configured": true, 00:18:48.732 "data_offset": 256, 00:18:48.732 "data_size": 7936 00:18:48.732 } 00:18:48.732 ] 00:18:48.732 }' 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.732 [2024-11-08 17:00:18.190875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:48.732 [2024-11-08 17:00:18.191122] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:48.732 [2024-11-08 17:00:18.191185] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:48.732 request: 00:18:48.732 { 00:18:48.732 "base_bdev": "BaseBdev1", 00:18:48.732 "raid_bdev": "raid_bdev1", 00:18:48.732 "method": "bdev_raid_add_base_bdev", 00:18:48.732 "req_id": 1 00:18:48.732 } 00:18:48.732 Got JSON-RPC error response 00:18:48.732 response: 00:18:48.732 { 00:18:48.732 "code": -22, 00:18:48.732 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:48.732 } 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.732 17:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.110 "name": "raid_bdev1", 00:18:50.110 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:50.110 "strip_size_kb": 0, 
00:18:50.110 "state": "online", 00:18:50.110 "raid_level": "raid1", 00:18:50.110 "superblock": true, 00:18:50.110 "num_base_bdevs": 2, 00:18:50.110 "num_base_bdevs_discovered": 1, 00:18:50.110 "num_base_bdevs_operational": 1, 00:18:50.110 "base_bdevs_list": [ 00:18:50.110 { 00:18:50.110 "name": null, 00:18:50.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.110 "is_configured": false, 00:18:50.110 "data_offset": 0, 00:18:50.110 "data_size": 7936 00:18:50.110 }, 00:18:50.110 { 00:18:50.110 "name": "BaseBdev2", 00:18:50.110 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:50.110 "is_configured": true, 00:18:50.110 "data_offset": 256, 00:18:50.110 "data_size": 7936 00:18:50.110 } 00:18:50.110 ] 00:18:50.110 }' 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.110 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.369 
17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.369 "name": "raid_bdev1", 00:18:50.369 "uuid": "dbbc1d24-dacc-4ac0-af1f-10a879fec4c9", 00:18:50.369 "strip_size_kb": 0, 00:18:50.369 "state": "online", 00:18:50.369 "raid_level": "raid1", 00:18:50.369 "superblock": true, 00:18:50.369 "num_base_bdevs": 2, 00:18:50.369 "num_base_bdevs_discovered": 1, 00:18:50.369 "num_base_bdevs_operational": 1, 00:18:50.369 "base_bdevs_list": [ 00:18:50.369 { 00:18:50.369 "name": null, 00:18:50.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.369 "is_configured": false, 00:18:50.369 "data_offset": 0, 00:18:50.369 "data_size": 7936 00:18:50.369 }, 00:18:50.369 { 00:18:50.369 "name": "BaseBdev2", 00:18:50.369 "uuid": "c98ae1cf-f3a5-5845-813e-e86d2955465f", 00:18:50.369 "is_configured": true, 00:18:50.369 "data_offset": 256, 00:18:50.369 "data_size": 7936 00:18:50.369 } 00:18:50.369 ] 00:18:50.369 }' 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99468 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99468 ']' 00:18:50.369 17:00:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99468 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99468 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99468' 00:18:50.369 killing process with pid 99468 00:18:50.369 Received shutdown signal, test time was about 60.000000 seconds 00:18:50.369 00:18:50.369 Latency(us) 00:18:50.369 [2024-11-08T17:00:19.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.369 [2024-11-08T17:00:19.897Z] =================================================================================================================== 00:18:50.369 [2024-11-08T17:00:19.897Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99468 00:18:50.369 [2024-11-08 17:00:19.869067] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:50.369 17:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99468 00:18:50.369 [2024-11-08 17:00:19.869228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.369 [2024-11-08 17:00:19.869288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:18:50.369 [2024-11-08 17:00:19.869298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:18:50.627 [2024-11-08 17:00:19.903806] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:50.627 17:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:50.627 00:18:50.627 real 0m16.241s 00:18:50.627 user 0m21.706s 00:18:50.627 sys 0m1.734s 00:18:50.627 17:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:50.627 17:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.627 ************************************ 00:18:50.627 END TEST raid_rebuild_test_sb_md_interleaved 00:18:50.627 ************************************ 00:18:50.886 17:00:20 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:50.886 17:00:20 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:50.886 17:00:20 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99468 ']' 00:18:50.886 17:00:20 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99468 00:18:50.886 17:00:20 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:50.886 ************************************ 00:18:50.886 END TEST bdev_raid 00:18:50.886 ************************************ 00:18:50.886 00:18:50.886 real 10m14.890s 00:18:50.886 user 14m37.532s 00:18:50.886 sys 1m50.634s 00:18:50.886 17:00:20 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:50.886 17:00:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.886 17:00:20 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:50.886 17:00:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:50.886 17:00:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:50.886 17:00:20 -- common/autotest_common.sh@10 -- # set +x 00:18:50.886 
************************************ 00:18:50.886 START TEST spdkcli_raid 00:18:50.886 ************************************ 00:18:50.886 17:00:20 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:50.886 * Looking for test storage... 00:18:50.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:50.886 17:00:20 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:50.886 17:00:20 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:18:50.886 17:00:20 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:51.145 17:00:20 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:51.145 17:00:20 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.146 17:00:20 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:51.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.146 --rc genhtml_branch_coverage=1 00:18:51.146 --rc genhtml_function_coverage=1 00:18:51.146 --rc genhtml_legend=1 00:18:51.146 --rc geninfo_all_blocks=1 00:18:51.146 --rc geninfo_unexecuted_blocks=1 00:18:51.146 00:18:51.146 ' 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:51.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.146 --rc genhtml_branch_coverage=1 00:18:51.146 --rc genhtml_function_coverage=1 00:18:51.146 --rc genhtml_legend=1 00:18:51.146 --rc geninfo_all_blocks=1 00:18:51.146 --rc geninfo_unexecuted_blocks=1 00:18:51.146 00:18:51.146 ' 00:18:51.146 
17:00:20 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:51.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.146 --rc genhtml_branch_coverage=1 00:18:51.146 --rc genhtml_function_coverage=1 00:18:51.146 --rc genhtml_legend=1 00:18:51.146 --rc geninfo_all_blocks=1 00:18:51.146 --rc geninfo_unexecuted_blocks=1 00:18:51.146 00:18:51.146 ' 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:51.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.146 --rc genhtml_branch_coverage=1 00:18:51.146 --rc genhtml_function_coverage=1 00:18:51.146 --rc genhtml_legend=1 00:18:51.146 --rc geninfo_all_blocks=1 00:18:51.146 --rc geninfo_unexecuted_blocks=1 00:18:51.146 00:18:51.146 ' 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:51.146 17:00:20 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100137 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100137 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 100137 ']' 00:18:51.146 17:00:20 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.146 17:00:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.146 [2024-11-08 17:00:20.632722] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:51.146 [2024-11-08 17:00:20.632987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100137 ] 00:18:51.405 [2024-11-08 17:00:20.804414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:51.405 [2024-11-08 17:00:20.851336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.405 [2024-11-08 17:00:20.851454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.973 17:00:21 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:51.973 17:00:21 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:18:51.973 17:00:21 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:51.973 17:00:21 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.973 17:00:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.232 17:00:21 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:52.232 17:00:21 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:52.232 17:00:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.232 17:00:21 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:52.232 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:52.232 ' 00:18:53.609 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:53.609 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:53.868 17:00:23 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:53.868 17:00:23 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.868 17:00:23 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.868 17:00:23 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:53.868 17:00:23 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:53.868 17:00:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.868 17:00:23 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:53.868 ' 00:18:55.247 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:55.247 17:00:24 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:55.247 17:00:24 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:55.247 17:00:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.247 17:00:24 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:55.247 17:00:24 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:55.247 17:00:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.247 17:00:24 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:55.247 17:00:24 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:55.816 17:00:25 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:55.816 17:00:25 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:55.816 17:00:25 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:55.816 17:00:25 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:55.816 17:00:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.816 17:00:25 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:55.816 17:00:25 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:55.816 17:00:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.816 17:00:25 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:55.816 ' 00:18:56.754 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:57.012 17:00:26 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:57.012 17:00:26 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:57.012 17:00:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.012 17:00:26 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:57.012 17:00:26 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:57.012 17:00:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.012 17:00:26 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:57.012 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:57.012 ' 00:18:58.391 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:58.391 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:58.391 17:00:27 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:58.391 17:00:27 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:58.391 17:00:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.391 17:00:27 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100137 00:18:58.391 17:00:27 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100137 ']' 00:18:58.391 17:00:27 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100137 00:18:58.391 17:00:27 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:18:58.391 17:00:27 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:58.391 17:00:27 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100137 00:18:58.391 17:00:27 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:58.391 17:00:27 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:58.391 17:00:27 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100137' 00:18:58.391 killing process with pid 100137 00:18:58.391 17:00:27 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 100137 00:18:58.391 17:00:27 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 100137 00:18:58.957 17:00:28 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:58.957 17:00:28 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100137 ']' 00:18:58.957 17:00:28 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100137 00:18:58.957 17:00:28 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100137 ']' 00:18:58.957 17:00:28 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100137 00:18:58.957 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (100137) - No such process 00:18:58.957 Process with pid 100137 is not found 00:18:58.957 17:00:28 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 100137 is not found' 00:18:58.957 17:00:28 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:58.957 17:00:28 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:58.957 17:00:28 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:58.957 17:00:28 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:58.957 00:18:58.957 real 0m8.041s 00:18:58.957 user 0m17.018s 
00:18:58.957 sys 0m1.172s 00:18:58.957 17:00:28 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:58.957 17:00:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.957 ************************************ 00:18:58.957 END TEST spdkcli_raid 00:18:58.957 ************************************ 00:18:58.957 17:00:28 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:58.957 17:00:28 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:58.957 17:00:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:58.957 17:00:28 -- common/autotest_common.sh@10 -- # set +x 00:18:58.957 ************************************ 00:18:58.957 START TEST blockdev_raid5f 00:18:58.957 ************************************ 00:18:58.957 17:00:28 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:58.957 * Looking for test storage... 00:18:58.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:58.957 17:00:28 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:58.957 17:00:28 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:18:58.957 17:00:28 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:59.216 17:00:28 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.216 17:00:28 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.216 17:00:28 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:59.216 17:00:28 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.216 17:00:28 blockdev_raid5f -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:18:59.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.216 --rc genhtml_branch_coverage=1 00:18:59.216 --rc genhtml_function_coverage=1 00:18:59.216 --rc genhtml_legend=1 00:18:59.216 --rc geninfo_all_blocks=1 00:18:59.216 --rc geninfo_unexecuted_blocks=1 00:18:59.216 00:18:59.216 ' 00:18:59.216 17:00:28 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:59.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.216 --rc genhtml_branch_coverage=1 00:18:59.216 --rc genhtml_function_coverage=1 00:18:59.216 --rc genhtml_legend=1 00:18:59.216 --rc geninfo_all_blocks=1 00:18:59.216 --rc geninfo_unexecuted_blocks=1 00:18:59.216 00:18:59.216 ' 00:18:59.216 17:00:28 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:59.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.216 --rc genhtml_branch_coverage=1 00:18:59.216 --rc genhtml_function_coverage=1 00:18:59.216 --rc genhtml_legend=1 00:18:59.216 --rc geninfo_all_blocks=1 00:18:59.216 --rc geninfo_unexecuted_blocks=1 00:18:59.216 00:18:59.216 ' 00:18:59.216 17:00:28 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:59.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.216 --rc genhtml_branch_coverage=1 00:18:59.216 --rc genhtml_function_coverage=1 00:18:59.216 --rc genhtml_legend=1 00:18:59.216 --rc geninfo_all_blocks=1 00:18:59.216 --rc geninfo_unexecuted_blocks=1 00:18:59.216 00:18:59.216 ' 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:59.216 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100395 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:59.217 17:00:28 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100395 00:18:59.217 17:00:28 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100395 ']' 00:18:59.217 17:00:28 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.217 17:00:28 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.217 17:00:28 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.217 17:00:28 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.217 17:00:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:59.217 [2024-11-08 17:00:28.620429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:59.217 [2024-11-08 17:00:28.620769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100395 ] 00:18:59.474 [2024-11-08 17:00:28.777445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.474 [2024-11-08 17:00:28.841910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:00.408 17:00:29 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.408 Malloc0 00:19:00.408 Malloc1 00:19:00.408 Malloc2 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.408 17:00:29 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:00.408 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:00.409 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3896200e-a51e-4d3e-a387-66e3167adebb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3896200e-a51e-4d3e-a387-66e3167adebb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3896200e-a51e-4d3e-a387-66e3167adebb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1301354d-8421-4b1e-9307-c31877c9e778",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"c58d0639-ba18-43c1-a805-df6c7a051a3c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c552ff3e-1d10-4af6-9060-e861786a8c7f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:00.409 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:00.409 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:00.409 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:00.409 17:00:29 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100395 00:19:00.409 17:00:29 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100395 ']' 00:19:00.409 17:00:29 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100395 00:19:00.409 17:00:29 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:19:00.409 17:00:29 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.409 17:00:29 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100395 00:19:00.409 killing process with pid 100395 00:19:00.409 17:00:29 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:00.409 17:00:29 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:00.409 17:00:29 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100395' 00:19:00.409 17:00:29 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100395 00:19:00.409 17:00:29 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100395 00:19:00.976 17:00:30 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:00.976 17:00:30 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:00.976 
17:00:30 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:00.976 17:00:30 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:00.976 17:00:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.976 ************************************ 00:19:00.976 START TEST bdev_hello_world 00:19:00.976 ************************************ 00:19:00.976 17:00:30 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:00.976 [2024-11-08 17:00:30.413668] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:00.976 [2024-11-08 17:00:30.413809] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100435 ] 00:19:01.235 [2024-11-08 17:00:30.581307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.235 [2024-11-08 17:00:30.635521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.494 [2024-11-08 17:00:30.823317] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:01.494 [2024-11-08 17:00:30.823362] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:01.494 [2024-11-08 17:00:30.823386] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:01.494 [2024-11-08 17:00:30.823714] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:01.494 [2024-11-08 17:00:30.823862] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:01.494 [2024-11-08 17:00:30.823879] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:01.494 [2024-11-08 17:00:30.823930] hello_bdev.c: 65:read_complete: *NOTICE*: Read 
string from bdev : Hello World! 00:19:01.494 00:19:01.494 [2024-11-08 17:00:30.823958] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:01.754 00:19:01.754 real 0m0.752s 00:19:01.754 user 0m0.423s 00:19:01.754 sys 0m0.214s 00:19:01.754 ************************************ 00:19:01.754 END TEST bdev_hello_world 00:19:01.754 ************************************ 00:19:01.754 17:00:31 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:01.754 17:00:31 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:01.754 17:00:31 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:01.754 17:00:31 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:01.754 17:00:31 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:01.754 17:00:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:01.754 ************************************ 00:19:01.754 START TEST bdev_bounds 00:19:01.754 ************************************ 00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:19:01.754 Process bdevio pid: 100466 00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100466 00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100466' 00:19:01.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100466 00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100466 ']' 00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.754 17:00:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:01.754 [2024-11-08 17:00:31.235249] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:01.754 [2024-11-08 17:00:31.235500] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100466 ] 00:19:02.014 [2024-11-08 17:00:31.402856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:02.014 [2024-11-08 17:00:31.453342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.014 [2024-11-08 17:00:31.453395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.014 [2024-11-08 17:00:31.453509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.583 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.583 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:19:02.583 17:00:32 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:02.842 I/O targets: 00:19:02.842 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:02.842 00:19:02.842 00:19:02.842 CUnit - A unit testing framework for C - Version 2.1-3 00:19:02.842 http://cunit.sourceforge.net/ 00:19:02.842 00:19:02.842 00:19:02.842 Suite: bdevio tests on: raid5f 00:19:02.842 Test: blockdev write read block ...passed 00:19:02.842 Test: blockdev write zeroes read block ...passed 00:19:02.842 Test: blockdev write zeroes read no split ...passed 00:19:02.842 Test: blockdev write zeroes read split ...passed 00:19:03.101 Test: blockdev write zeroes read split partial ...passed 00:19:03.101 Test: blockdev reset ...passed 00:19:03.101 Test: blockdev write read 8 blocks ...passed 00:19:03.101 Test: blockdev write read size > 128k ...passed 00:19:03.101 Test: blockdev write read invalid size ...passed 00:19:03.101 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:03.101 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:03.101 Test: blockdev write read max offset ...passed 00:19:03.101 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:03.101 Test: blockdev writev readv 8 blocks ...passed 00:19:03.101 Test: blockdev writev readv 30 x 1block ...passed 00:19:03.101 Test: blockdev writev readv block ...passed 00:19:03.101 Test: blockdev writev readv size > 128k ...passed 00:19:03.101 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:03.101 Test: blockdev comparev and writev ...passed 00:19:03.101 Test: blockdev nvme passthru rw ...passed 00:19:03.101 Test: blockdev nvme passthru vendor specific ...passed 00:19:03.101 Test: blockdev nvme admin passthru ...passed 00:19:03.101 Test: blockdev copy ...passed 00:19:03.101 00:19:03.101 Run Summary: Type Total Ran Passed Failed Inactive 00:19:03.101 suites 1 1 n/a 0 0 00:19:03.101 tests 23 23 23 0 0 00:19:03.101 asserts 130 130 130 0 n/a 
00:19:03.101 00:19:03.101 Elapsed time = 0.411 seconds 00:19:03.101 0 00:19:03.101 17:00:32 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100466 00:19:03.101 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100466 ']' 00:19:03.101 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100466 00:19:03.101 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:03.101 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:03.101 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100466 00:19:03.101 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:03.101 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:03.101 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100466' 00:19:03.101 killing process with pid 100466 00:19:03.101 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100466 00:19:03.101 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100466 00:19:03.361 17:00:32 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:03.361 00:19:03.361 real 0m1.573s 00:19:03.361 user 0m3.784s 00:19:03.361 sys 0m0.356s 00:19:03.361 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.361 17:00:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:03.361 ************************************ 00:19:03.361 END TEST bdev_bounds 00:19:03.361 ************************************ 00:19:03.361 17:00:32 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 
00:19:03.361 17:00:32 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:03.361 17:00:32 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.361 17:00:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:03.361 ************************************ 00:19:03.361 START TEST bdev_nbd 00:19:03.361 ************************************ 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@313 -- # local nbd_list 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100510 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100510 /var/tmp/spdk-nbd.sock 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100510 ']' 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:03.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:03.361 17:00:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:03.621 [2024-11-08 17:00:32.905463] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:19:03.621 [2024-11-08 17:00:32.905618] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.621 [2024-11-08 17:00:33.076367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.621 [2024-11-08 17:00:33.125567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.560 17:00:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:04.560 17:00:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:04.560 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:04.561 17:00:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:04.561 17:00:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.561 1+0 records in 00:19:04.561 1+0 records out 00:19:04.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579785 s, 7.1 MB/s 00:19:04.561 17:00:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.561 17:00:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:04.561 17:00:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.561 17:00:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:19:04.561 17:00:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:04.561 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:04.561 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:04.561 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:04.820 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:04.820 { 00:19:04.820 "nbd_device": "/dev/nbd0", 00:19:04.820 "bdev_name": "raid5f" 00:19:04.820 } 00:19:04.820 ]' 00:19:04.820 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:04.820 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:04.820 { 00:19:04.820 "nbd_device": "/dev/nbd0", 00:19:04.820 "bdev_name": "raid5f" 00:19:04.820 } 00:19:04.820 ]' 00:19:04.820 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:04.820 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:04.820 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.820 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:04.820 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:04.820 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:04.820 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.820 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:05.077 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:05.077 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:05.077 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:05.077 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.077 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.077 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:05.077 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:05.077 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.077 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:05.077 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.077 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.335 17:00:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:05.594 /dev/nbd0 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:05.594 17:00:35 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:05.594 1+0 records in 00:19:05.594 1+0 records out 00:19:05.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416719 s, 9.8 MB/s 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.594 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:05.853 { 00:19:05.853 "nbd_device": "/dev/nbd0", 00:19:05.853 "bdev_name": "raid5f" 00:19:05.853 } 00:19:05.853 ]' 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:05.853 { 00:19:05.853 "nbd_device": "/dev/nbd0", 00:19:05.853 "bdev_name": "raid5f" 00:19:05.853 } 00:19:05.853 ]' 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:05.853 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:06.112 256+0 records in 00:19:06.112 256+0 records out 00:19:06.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013113 s, 80.0 MB/s 00:19:06.112 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:06.113 256+0 records in 00:19:06.113 256+0 records out 00:19:06.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0369918 s, 28.3 MB/s 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:06.113 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:06.371 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:06.371 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:06.371 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:06.371 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:06.371 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:06.371 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:06.371 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:06.371 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:06.371 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:06.371 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.371 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:06.631 17:00:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:06.890 malloc_lvol_verify 00:19:06.890 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:07.150 1afdd7b0-9fb4-44cb-b269-cb99207289ec 00:19:07.150 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:07.415 db0a5186-83ef-485e-bfde-40a0d9ee4bd6 00:19:07.415 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:07.415 /dev/nbd0 00:19:07.675 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:07.675 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:07.675 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:07.675 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:07.675 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:07.675 mke2fs 1.47.0 (5-Feb-2023) 00:19:07.675 Discarding device blocks: 0/4096 done 00:19:07.675 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:07.675 00:19:07.675 Allocating group tables: 0/1 done 00:19:07.675 Writing inode tables: 0/1 done 00:19:07.675 Creating journal (1024 blocks): done 00:19:07.676 Writing superblocks and filesystem accounting information: 0/1 done 00:19:07.676 00:19:07.676 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:07.676 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:07.676 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:07.676 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.676 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:07.676 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.676 17:00:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100510 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100510 ']' 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100510 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100510 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:07.935 killing process with pid 100510 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100510' 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100510 00:19:07.935 17:00:37 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100510 00:19:08.195 17:00:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:08.195 00:19:08.195 real 0m4.756s 00:19:08.195 user 0m7.010s 00:19:08.195 sys 0m1.350s 00:19:08.195 17:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:08.195 17:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:08.195 ************************************ 00:19:08.195 END TEST bdev_nbd 00:19:08.195 ************************************ 00:19:08.195 17:00:37 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:08.195 17:00:37 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:08.195 17:00:37 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:08.195 17:00:37 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:08.195 17:00:37 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:08.195 17:00:37 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:08.195 17:00:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:08.195 ************************************ 00:19:08.195 START TEST bdev_fio 00:19:08.195 ************************************ 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:08.195 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:19:08.195 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:08.455 ************************************ 00:19:08.455 START TEST bdev_fio_rw_verify 00:19:08.455 ************************************ 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:08.455 17:00:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:08.715 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:08.715 fio-3.35 00:19:08.715 Starting 1 thread 00:19:20.981 00:19:20.981 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100704: Fri Nov 8 17:00:48 2024 00:19:20.981 read: IOPS=8816, BW=34.4MiB/s (36.1MB/s)(344MiB/10001msec) 00:19:20.981 slat (usec): min=18, max=216, avg=26.89, stdev= 4.75 00:19:20.981 clat (usec): min=12, max=1020, avg=181.66, stdev=69.86 00:19:20.981 lat (usec): min=43, max=1182, avg=208.55, stdev=71.74 00:19:20.981 clat percentiles (usec): 00:19:20.981 | 50.000th=[ 176], 99.000th=[ 302], 99.900th=[ 627], 99.990th=[ 914], 00:19:20.981 | 99.999th=[ 1020] 00:19:20.981 write: IOPS=9251, BW=36.1MiB/s (37.9MB/s)(357MiB/9872msec); 0 zone resets 00:19:20.981 slat (usec): min=9, max=255, avg=23.46, stdev= 5.25 00:19:20.981 clat (usec): min=85, max=742, avg=410.90, stdev=67.18 00:19:20.981 lat (usec): min=108, max=844, avg=434.36, stdev=69.35 00:19:20.981 clat percentiles (usec): 00:19:20.981 | 50.000th=[ 416], 99.000th=[ 537], 99.900th=[ 652], 99.990th=[ 709], 00:19:20.981 | 99.999th=[ 742] 00:19:20.981 bw ( KiB/s): min=33664, max=43176, per=99.11%, avg=36677.89, stdev=3347.20, samples=19 00:19:20.981 iops : min= 8416, max=10794, avg=9169.47, stdev=836.80, samples=19 00:19:20.981 lat (usec) : 20=0.01%, 100=7.21%, 250=32.89%, 
500=57.12%, 750=2.74% 00:19:20.981 lat (usec) : 1000=0.03% 00:19:20.981 lat (msec) : 2=0.01% 00:19:20.981 cpu : usr=98.77%, sys=0.48%, ctx=28, majf=0, minf=10692 00:19:20.981 IO depths : 1=7.7%, 2=19.8%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.981 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.981 issued rwts: total=88172,91332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.981 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:20.981 00:19:20.981 Run status group 0 (all jobs): 00:19:20.981 READ: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=344MiB (361MB), run=10001-10001msec 00:19:20.981 WRITE: bw=36.1MiB/s (37.9MB/s), 36.1MiB/s-36.1MiB/s (37.9MB/s-37.9MB/s), io=357MiB (374MB), run=9872-9872msec 00:19:20.981 ----------------------------------------------------- 00:19:20.981 Suppressions used: 00:19:20.981 count bytes template 00:19:20.981 1 7 /usr/src/fio/parse.c 00:19:20.981 424 40704 /usr/src/fio/iolog.c 00:19:20.981 1 8 libtcmalloc_minimal.so 00:19:20.981 1 904 libcrypto.so 00:19:20.981 ----------------------------------------------------- 00:19:20.981 00:19:20.981 00:19:20.981 real 0m11.303s 00:19:20.981 user 0m11.606s 00:19:20.981 sys 0m0.734s 00:19:20.981 17:00:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:20.981 17:00:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:20.981 ************************************ 00:19:20.981 END TEST bdev_fio_rw_verify 00:19:20.981 ************************************ 00:19:20.981 17:00:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:20.981 17:00:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:20.981 17:00:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:20.981 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:20.981 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3896200e-a51e-4d3e-a387-66e3167adebb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3896200e-a51e-4d3e-a387-66e3167adebb",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3896200e-a51e-4d3e-a387-66e3167adebb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1301354d-8421-4b1e-9307-c31877c9e778",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c58d0639-ba18-43c1-a805-df6c7a051a3c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c552ff3e-1d10-4af6-9060-e861786a8c7f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:20.982 /home/vagrant/spdk_repo/spdk 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:19:20.982 00:19:20.982 real 
0m11.610s 00:19:20.982 user 0m11.741s 00:19:20.982 sys 0m0.882s 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:20.982 17:00:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:20.982 ************************************ 00:19:20.982 END TEST bdev_fio 00:19:20.982 ************************************ 00:19:20.982 17:00:49 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:20.982 17:00:49 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:20.982 17:00:49 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:20.982 17:00:49 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:20.982 17:00:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:20.982 ************************************ 00:19:20.982 START TEST bdev_verify 00:19:20.982 ************************************ 00:19:20.982 17:00:49 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:20.982 [2024-11-08 17:00:49.392599] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:19:20.982 [2024-11-08 17:00:49.392780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100858 ] 00:19:20.982 [2024-11-08 17:00:49.564321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:20.982 [2024-11-08 17:00:49.619128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.982 [2024-11-08 17:00:49.619222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.982 Running I/O for 5 seconds... 00:19:22.488 11070.00 IOPS, 43.24 MiB/s [2024-11-08T17:00:52.953Z] 12085.00 IOPS, 47.21 MiB/s [2024-11-08T17:00:53.891Z] 12275.33 IOPS, 47.95 MiB/s [2024-11-08T17:00:54.832Z] 12429.00 IOPS, 48.55 MiB/s [2024-11-08T17:00:54.832Z] 12204.60 IOPS, 47.67 MiB/s 00:19:25.304 Latency(us) 00:19:25.304 [2024-11-08T17:00:54.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.304 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:25.304 Verification LBA range: start 0x0 length 0x2000 00:19:25.304 raid5f : 5.02 6084.41 23.77 0.00 0.00 31469.02 204.80 26214.40 00:19:25.304 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:25.304 Verification LBA range: start 0x2000 length 0x2000 00:19:25.304 raid5f : 5.01 6087.79 23.78 0.00 0.00 31533.71 388.14 26214.40 00:19:25.304 [2024-11-08T17:00:54.832Z] =================================================================================================================== 00:19:25.304 [2024-11-08T17:00:54.832Z] Total : 12172.21 47.55 0.00 0.00 31501.37 204.80 26214.40 00:19:25.872 00:19:25.872 real 0m5.798s 00:19:25.872 user 0m10.709s 00:19:25.872 sys 0m0.260s 00:19:25.872 17:00:55 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.872 17:00:55 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:25.872 ************************************ 00:19:25.872 END TEST bdev_verify 00:19:25.872 ************************************ 00:19:25.872 17:00:55 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:25.872 17:00:55 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:25.872 17:00:55 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:25.872 17:00:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:25.872 ************************************ 00:19:25.872 START TEST bdev_verify_big_io 00:19:25.872 ************************************ 00:19:25.872 17:00:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:25.872 [2024-11-08 17:00:55.245467] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:25.872 [2024-11-08 17:00:55.245639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100940 ] 00:19:26.131 [2024-11-08 17:00:55.408095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:26.131 [2024-11-08 17:00:55.461287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.131 [2024-11-08 17:00:55.461404] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.393 Running I/O for 5 seconds... 
00:19:28.269 756.00 IOPS, 47.25 MiB/s [2024-11-08T17:00:58.735Z] 761.00 IOPS, 47.56 MiB/s [2024-11-08T17:01:00.113Z] 761.33 IOPS, 47.58 MiB/s [2024-11-08T17:01:00.681Z] 777.00 IOPS, 48.56 MiB/s [2024-11-08T17:01:00.940Z] 812.40 IOPS, 50.77 MiB/s 00:19:31.412 Latency(us) 00:19:31.412 [2024-11-08T17:01:00.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.412 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:31.412 Verification LBA range: start 0x0 length 0x200 00:19:31.412 raid5f : 5.11 422.29 26.39 0.00 0.00 7435303.88 169.03 329683.28 00:19:31.412 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:31.412 Verification LBA range: start 0x200 length 0x200 00:19:31.412 raid5f : 5.17 417.53 26.10 0.00 0.00 7558325.16 165.45 338841.15 00:19:31.412 [2024-11-08T17:01:00.940Z] =================================================================================================================== 00:19:31.412 [2024-11-08T17:01:00.940Z] Total : 839.82 52.49 0.00 0.00 7496814.52 165.45 338841.15 00:19:31.671 00:19:31.671 real 0m5.928s 00:19:31.671 user 0m11.015s 00:19:31.671 sys 0m0.231s 00:19:31.671 17:01:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:31.671 17:01:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.671 ************************************ 00:19:31.671 END TEST bdev_verify_big_io 00:19:31.671 ************************************ 00:19:31.671 17:01:01 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:31.671 17:01:01 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:31.671 17:01:01 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:31.671 17:01:01 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:31.671 ************************************ 00:19:31.671 START TEST bdev_write_zeroes 00:19:31.671 ************************************ 00:19:31.671 17:01:01 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:31.671 [2024-11-08 17:01:01.196153] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:31.671 [2024-11-08 17:01:01.196298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101022 ] 00:19:31.931 [2024-11-08 17:01:01.352784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.931 [2024-11-08 17:01:01.400179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.190 Running I/O for 1 seconds... 
00:19:33.191 21999.00 IOPS, 85.93 MiB/s 00:19:33.191 Latency(us) 00:19:33.191 [2024-11-08T17:01:02.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.191 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:33.191 raid5f : 1.01 21953.46 85.76 0.00 0.00 5808.59 1817.26 8471.03 00:19:33.191 [2024-11-08T17:01:02.719Z] =================================================================================================================== 00:19:33.191 [2024-11-08T17:01:02.719Z] Total : 21953.46 85.76 0.00 0.00 5808.59 1817.26 8471.03 00:19:33.449 00:19:33.449 real 0m1.749s 00:19:33.449 user 0m1.412s 00:19:33.449 sys 0m0.215s 00:19:33.449 17:01:02 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.449 17:01:02 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:33.449 ************************************ 00:19:33.449 END TEST bdev_write_zeroes 00:19:33.449 ************************************ 00:19:33.449 17:01:02 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:33.449 17:01:02 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:33.449 17:01:02 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.449 17:01:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:33.449 ************************************ 00:19:33.449 START TEST bdev_json_nonenclosed 00:19:33.449 ************************************ 00:19:33.449 17:01:02 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:33.449 [2024-11-08 
17:01:02.971072] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:33.449 [2024-11-08 17:01:02.971242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101064 ] 00:19:33.707 [2024-11-08 17:01:03.128201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.707 [2024-11-08 17:01:03.176656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.707 [2024-11-08 17:01:03.176759] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:33.707 [2024-11-08 17:01:03.176792] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:33.707 [2024-11-08 17:01:03.176806] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:33.966 00:19:33.966 real 0m0.399s 00:19:33.966 user 0m0.187s 00:19:33.966 sys 0m0.108s 00:19:33.966 17:01:03 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.966 17:01:03 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:33.966 ************************************ 00:19:33.966 END TEST bdev_json_nonenclosed 00:19:33.966 ************************************ 00:19:33.966 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:33.966 17:01:03 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:33.966 17:01:03 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.966 17:01:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:33.966 
************************************ 00:19:33.966 START TEST bdev_json_nonarray 00:19:33.966 ************************************ 00:19:33.966 17:01:03 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:33.966 [2024-11-08 17:01:03.444369] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:33.966 [2024-11-08 17:01:03.444529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101084 ] 00:19:34.225 [2024-11-08 17:01:03.601661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.225 [2024-11-08 17:01:03.655804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.225 [2024-11-08 17:01:03.655926] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:34.225 [2024-11-08 17:01:03.655957] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:34.225 [2024-11-08 17:01:03.655980] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:34.484 00:19:34.484 real 0m0.420s 00:19:34.484 user 0m0.195s 00:19:34.484 sys 0m0.122s 00:19:34.484 17:01:03 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:34.484 17:01:03 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:34.484 ************************************ 00:19:34.484 END TEST bdev_json_nonarray 00:19:34.484 ************************************ 00:19:34.484 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:34.484 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:34.484 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:34.484 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:34.484 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:34.484 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:34.484 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:34.484 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:34.484 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:34.484 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:34.484 17:01:03 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:34.484 00:19:34.484 real 0m35.443s 00:19:34.484 user 0m48.678s 00:19:34.484 sys 0m4.680s 00:19:34.484 17:01:03 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:34.484 17:01:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:34.484 
************************************ 00:19:34.484 END TEST blockdev_raid5f 00:19:34.484 ************************************ 00:19:34.484 17:01:03 -- spdk/autotest.sh@194 -- # uname -s 00:19:34.484 17:01:03 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:34.484 17:01:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:34.484 17:01:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:34.484 17:01:03 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@256 -- # timing_exit lib 00:19:34.484 17:01:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.484 17:01:03 -- common/autotest_common.sh@10 -- # set +x 00:19:34.484 17:01:03 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:34.484 17:01:03 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:19:34.484 17:01:03 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:34.484 17:01:03 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:34.484 17:01:03 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:19:34.484 17:01:03 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:19:34.484 17:01:03 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:19:34.484 17:01:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.484 17:01:03 -- common/autotest_common.sh@10 -- # set +x 00:19:34.484 17:01:03 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:19:34.484 17:01:03 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:19:34.484 17:01:03 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:19:34.484 17:01:03 -- common/autotest_common.sh@10 -- # set +x 00:19:35.860 INFO: APP EXITING 00:19:35.860 INFO: killing all VMs 00:19:35.860 INFO: killing vhost app 00:19:35.860 INFO: EXIT DONE 00:19:35.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:36.119 Waiting for block devices as requested 00:19:36.119 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:36.119 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:36.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:36.745 Cleaning 00:19:36.745 Removing: /var/run/dpdk/spdk0/config 00:19:36.745 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:36.745 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:36.745 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:36.745 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:36.745 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:36.745 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:36.745 Removing: /dev/shm/spdk_tgt_trace.pid69109 00:19:36.745 Removing: /var/run/dpdk/spdk0 00:19:36.745 Removing: /var/run/dpdk/spdk_pid100137 00:19:36.745 Removing: /var/run/dpdk/spdk_pid100395 00:19:36.745 Removing: /var/run/dpdk/spdk_pid100435 00:19:36.745 Removing: /var/run/dpdk/spdk_pid100466 00:19:36.745 Removing: /var/run/dpdk/spdk_pid100694 00:19:36.745 Removing: /var/run/dpdk/spdk_pid100858 00:19:36.745 Removing: 
/var/run/dpdk/spdk_pid100940 00:19:36.745 Removing: /var/run/dpdk/spdk_pid101022 00:19:36.745 Removing: /var/run/dpdk/spdk_pid101064 00:19:36.745 Removing: /var/run/dpdk/spdk_pid101084 00:19:36.745 Removing: /var/run/dpdk/spdk_pid68946 00:19:36.745 Removing: /var/run/dpdk/spdk_pid69109 00:19:36.745 Removing: /var/run/dpdk/spdk_pid69311 00:19:36.745 Removing: /var/run/dpdk/spdk_pid69398 00:19:36.745 Removing: /var/run/dpdk/spdk_pid69427 00:19:36.745 Removing: /var/run/dpdk/spdk_pid69534 00:19:36.745 Removing: /var/run/dpdk/spdk_pid69551 00:19:36.745 Removing: /var/run/dpdk/spdk_pid69739 00:19:36.745 Removing: /var/run/dpdk/spdk_pid69818 00:19:36.745 Removing: /var/run/dpdk/spdk_pid69903 00:19:36.745 Removing: /var/run/dpdk/spdk_pid69992 00:19:36.745 Removing: /var/run/dpdk/spdk_pid70078 00:19:36.745 Removing: /var/run/dpdk/spdk_pid70112 00:19:36.745 Removing: /var/run/dpdk/spdk_pid70154 00:19:37.003 Removing: /var/run/dpdk/spdk_pid70219 00:19:37.003 Removing: /var/run/dpdk/spdk_pid70342 00:19:37.003 Removing: /var/run/dpdk/spdk_pid70767 00:19:37.003 Removing: /var/run/dpdk/spdk_pid70820 00:19:37.003 Removing: /var/run/dpdk/spdk_pid70866 00:19:37.003 Removing: /var/run/dpdk/spdk_pid70877 00:19:37.003 Removing: /var/run/dpdk/spdk_pid70946 00:19:37.003 Removing: /var/run/dpdk/spdk_pid70961 00:19:37.003 Removing: /var/run/dpdk/spdk_pid71022 00:19:37.003 Removing: /var/run/dpdk/spdk_pid71038 00:19:37.003 Removing: /var/run/dpdk/spdk_pid71091 00:19:37.003 Removing: /var/run/dpdk/spdk_pid71109 00:19:37.003 Removing: /var/run/dpdk/spdk_pid71151 00:19:37.003 Removing: /var/run/dpdk/spdk_pid71169 00:19:37.003 Removing: /var/run/dpdk/spdk_pid71301 00:19:37.003 Removing: /var/run/dpdk/spdk_pid71338 00:19:37.003 Removing: /var/run/dpdk/spdk_pid71421 00:19:37.003 Removing: /var/run/dpdk/spdk_pid72597 00:19:37.003 Removing: /var/run/dpdk/spdk_pid72792 00:19:37.003 Removing: /var/run/dpdk/spdk_pid72921 00:19:37.003 Removing: /var/run/dpdk/spdk_pid73530 00:19:37.003 Removing: 
/var/run/dpdk/spdk_pid73726 00:19:37.003 Removing: /var/run/dpdk/spdk_pid73855 00:19:37.003 Removing: /var/run/dpdk/spdk_pid74461 00:19:37.003 Removing: /var/run/dpdk/spdk_pid74779 00:19:37.003 Removing: /var/run/dpdk/spdk_pid74908 00:19:37.003 Removing: /var/run/dpdk/spdk_pid76238 00:19:37.003 Removing: /var/run/dpdk/spdk_pid76480 00:19:37.003 Removing: /var/run/dpdk/spdk_pid76615 00:19:37.003 Removing: /var/run/dpdk/spdk_pid77950 00:19:37.003 Removing: /var/run/dpdk/spdk_pid78192 00:19:37.003 Removing: /var/run/dpdk/spdk_pid78321 00:19:37.003 Removing: /var/run/dpdk/spdk_pid79662 00:19:37.003 Removing: /var/run/dpdk/spdk_pid80091 00:19:37.003 Removing: /var/run/dpdk/spdk_pid80220 00:19:37.003 Removing: /var/run/dpdk/spdk_pid81655 00:19:37.003 Removing: /var/run/dpdk/spdk_pid81904 00:19:37.003 Removing: /var/run/dpdk/spdk_pid82033 00:19:37.003 Removing: /var/run/dpdk/spdk_pid83474 00:19:37.003 Removing: /var/run/dpdk/spdk_pid83728 00:19:37.003 Removing: /var/run/dpdk/spdk_pid83862 00:19:37.003 Removing: /var/run/dpdk/spdk_pid85298 00:19:37.003 Removing: /var/run/dpdk/spdk_pid85774 00:19:37.003 Removing: /var/run/dpdk/spdk_pid85909 00:19:37.003 Removing: /var/run/dpdk/spdk_pid86036 00:19:37.004 Removing: /var/run/dpdk/spdk_pid86453 00:19:37.004 Removing: /var/run/dpdk/spdk_pid87169 00:19:37.004 Removing: /var/run/dpdk/spdk_pid87564 00:19:37.004 Removing: /var/run/dpdk/spdk_pid88237 00:19:37.004 Removing: /var/run/dpdk/spdk_pid88679 00:19:37.004 Removing: /var/run/dpdk/spdk_pid89426 00:19:37.004 Removing: /var/run/dpdk/spdk_pid89828 00:19:37.004 Removing: /var/run/dpdk/spdk_pid91749 00:19:37.004 Removing: /var/run/dpdk/spdk_pid92180 00:19:37.004 Removing: /var/run/dpdk/spdk_pid92605 00:19:37.004 Removing: /var/run/dpdk/spdk_pid94655 00:19:37.004 Removing: /var/run/dpdk/spdk_pid95134 00:19:37.004 Removing: /var/run/dpdk/spdk_pid95637 00:19:37.004 Removing: /var/run/dpdk/spdk_pid96676 00:19:37.004 Removing: /var/run/dpdk/spdk_pid96988 00:19:37.004 Removing: 
/var/run/dpdk/spdk_pid97903 00:19:37.004 Removing: /var/run/dpdk/spdk_pid98221 00:19:37.004 Removing: /var/run/dpdk/spdk_pid99151 00:19:37.004 Removing: /var/run/dpdk/spdk_pid99468 00:19:37.004 Clean 00:19:37.004 17:01:06 -- common/autotest_common.sh@1451 -- # return 0 00:19:37.004 17:01:06 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:19:37.004 17:01:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.004 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:19:37.263 17:01:06 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:19:37.263 17:01:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.263 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:19:37.263 17:01:06 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:37.263 17:01:06 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:37.263 17:01:06 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:37.263 17:01:06 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:19:37.263 17:01:06 -- spdk/autotest.sh@394 -- # hostname 00:19:37.263 17:01:06 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:37.263 geninfo: WARNING: invalid characters removed from testname! 
00:20:03.817 17:01:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:05.810 17:01:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:08.345 17:01:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:10.883 17:01:39 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:12.787 17:01:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:15.325 17:01:44 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:17.860 17:01:47 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:20:17.860 17:01:47 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:20:17.860 17:01:47 -- common/autotest_common.sh@1681 -- $ lcov --version
00:20:17.860 17:01:47 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:20:18.121 17:01:47 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:20:18.121 17:01:47 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:20:18.121 17:01:47 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:20:18.121 17:01:47 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:20:18.121 17:01:47 -- scripts/common.sh@336 -- $ IFS=.-:
00:20:18.121 17:01:47 -- scripts/common.sh@336 -- $ read -ra ver1
00:20:18.121 17:01:47 -- scripts/common.sh@337 -- $ IFS=.-:
00:20:18.121 17:01:47 -- scripts/common.sh@337 -- $ read -ra ver2
00:20:18.121 17:01:47 -- scripts/common.sh@338 -- $ local 'op=<'
00:20:18.121 17:01:47 -- scripts/common.sh@340 -- $ ver1_l=2
00:20:18.121 17:01:47 -- scripts/common.sh@341 -- $ ver2_l=1
00:20:18.121 17:01:47 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:20:18.121 17:01:47 -- scripts/common.sh@344 -- $ case "$op" in
00:20:18.121 17:01:47 -- scripts/common.sh@345 -- $ : 1
00:20:18.121 17:01:47 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:20:18.121 17:01:47 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:18.121 17:01:47 -- scripts/common.sh@365 -- $ decimal 1
00:20:18.121 17:01:47 -- scripts/common.sh@353 -- $ local d=1
00:20:18.121 17:01:47 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:20:18.121 17:01:47 -- scripts/common.sh@355 -- $ echo 1
00:20:18.121 17:01:47 -- scripts/common.sh@365 -- $ ver1[v]=1
00:20:18.121 17:01:47 -- scripts/common.sh@366 -- $ decimal 2
00:20:18.121 17:01:47 -- scripts/common.sh@353 -- $ local d=2
00:20:18.121 17:01:47 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:20:18.121 17:01:47 -- scripts/common.sh@355 -- $ echo 2
00:20:18.121 17:01:47 -- scripts/common.sh@366 -- $ ver2[v]=2
00:20:18.121 17:01:47 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:20:18.121 17:01:47 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:20:18.121 17:01:47 -- scripts/common.sh@368 -- $ return 0
00:20:18.121 17:01:47 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:18.121 17:01:47 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:20:18.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:18.121 --rc genhtml_branch_coverage=1
00:20:18.121 --rc genhtml_function_coverage=1
00:20:18.121 --rc genhtml_legend=1
00:20:18.121 --rc geninfo_all_blocks=1
00:20:18.121 --rc geninfo_unexecuted_blocks=1
00:20:18.121
00:20:18.121 '
00:20:18.121 17:01:47 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:20:18.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:18.121 --rc genhtml_branch_coverage=1
00:20:18.121 --rc genhtml_function_coverage=1
00:20:18.121 --rc genhtml_legend=1
00:20:18.121 --rc geninfo_all_blocks=1
00:20:18.121 --rc geninfo_unexecuted_blocks=1
00:20:18.121
00:20:18.121 '
00:20:18.121 17:01:47 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:20:18.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:18.121 --rc genhtml_branch_coverage=1
00:20:18.121 --rc genhtml_function_coverage=1
00:20:18.121 --rc genhtml_legend=1
00:20:18.121 --rc geninfo_all_blocks=1
00:20:18.121 --rc geninfo_unexecuted_blocks=1
00:20:18.121
00:20:18.121 '
00:20:18.121 17:01:47 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:20:18.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:18.121 --rc genhtml_branch_coverage=1
00:20:18.121 --rc genhtml_function_coverage=1
00:20:18.121 --rc genhtml_legend=1
00:20:18.121 --rc geninfo_all_blocks=1
00:20:18.121 --rc geninfo_unexecuted_blocks=1
00:20:18.121
00:20:18.121 '
00:20:18.121 17:01:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:18.121 17:01:47 -- scripts/common.sh@15 -- $ shopt -s extglob
00:20:18.121 17:01:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:20:18.121 17:01:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:18.121 17:01:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:18.122 17:01:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:18.122 17:01:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:18.122 17:01:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:18.122 17:01:47 -- paths/export.sh@5 -- $ export PATH
00:20:18.122 17:01:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:18.122 17:01:47 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:20:18.122 17:01:47 -- common/autobuild_common.sh@479 -- $ date +%s
00:20:18.122 17:01:47 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731085307.XXXXXX
00:20:18.122 17:01:47 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731085307.N5LEVO
00:20:18.122 17:01:47 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:20:18.122 17:01:47 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:20:18.122 17:01:47 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:20:18.122 17:01:47 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:20:18.122 17:01:47 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:20:18.122 17:01:47 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:20:18.122 17:01:47 -- common/autobuild_common.sh@495 -- $ get_config_params
00:20:18.122 17:01:47 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:20:18.122 17:01:47 -- common/autotest_common.sh@10 -- $ set +x
00:20:18.122 17:01:47 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:20:18.122 17:01:47 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:20:18.122 17:01:47 -- pm/common@17 -- $ local monitor
00:20:18.122 17:01:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:20:18.122 17:01:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:20:18.122 17:01:47 -- pm/common@25 -- $ sleep 1
00:20:18.122 17:01:47 -- pm/common@21 -- $ date +%s
00:20:18.122 17:01:47 -- pm/common@21 -- $ date +%s
00:20:18.122 17:01:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731085307
00:20:18.122 17:01:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731085307
00:20:18.122 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731085307_collect-cpu-load.pm.log
00:20:18.122 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731085307_collect-vmstat.pm.log
00:20:19.060 17:01:48 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:20:19.060 17:01:48 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:20:19.060 17:01:48 -- spdk/autopackage.sh@14 -- $ timing_finish
00:20:19.060 17:01:48 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:20:19.060 17:01:48 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:20:19.060 17:01:48 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:19.318 17:01:48 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:20:19.318 17:01:48 -- pm/common@29 -- $ signal_monitor_resources TERM
00:20:19.318 17:01:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:20:19.318 17:01:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:20:19.318 17:01:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:20:19.318 17:01:48 -- pm/common@44 -- $ pid=102566
00:20:19.318 17:01:48 -- pm/common@50 -- $ kill -TERM 102566
00:20:19.318 17:01:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:20:19.318 17:01:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:20:19.318 17:01:48 -- pm/common@44 -- $ pid=102568
00:20:19.318 17:01:48 -- pm/common@50 -- $ kill -TERM 102568
00:20:19.318 + [[ -n 6166 ]]
00:20:19.318 + sudo kill 6166
00:20:19.327 [Pipeline] }
00:20:19.343 [Pipeline] // timeout
00:20:19.349 [Pipeline] }
00:20:19.364 [Pipeline] // stage
00:20:19.370 [Pipeline] }
00:20:19.385 [Pipeline] // catchError
00:20:19.395 [Pipeline] stage
00:20:19.397 [Pipeline] { (Stop VM)
00:20:19.409 [Pipeline] sh
00:20:19.692 + vagrant halt
00:20:22.320 ==> default: Halting domain...
00:20:30.453 [Pipeline] sh
00:20:30.735 + vagrant destroy -f
00:20:34.025 ==> default: Removing domain...
00:20:34.037 [Pipeline] sh
00:20:34.320 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:20:34.327 [Pipeline] }
00:20:34.342 [Pipeline] // stage
00:20:34.346 [Pipeline] }
00:20:34.360 [Pipeline] // dir
00:20:34.365 [Pipeline] }
00:20:34.379 [Pipeline] // wrap
00:20:34.385 [Pipeline] }
00:20:34.398 [Pipeline] // catchError
00:20:34.407 [Pipeline] stage
00:20:34.409 [Pipeline] { (Epilogue)
00:20:34.421 [Pipeline] sh
00:20:34.704 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:20:39.995 [Pipeline] catchError
00:20:39.997 [Pipeline] {
00:20:40.010 [Pipeline] sh
00:20:40.305 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:20:40.305 Artifacts sizes are good
00:20:40.314 [Pipeline] }
00:20:40.328 [Pipeline] // catchError
00:20:40.338 [Pipeline] archiveArtifacts
00:20:40.346 Archiving artifacts
00:20:40.444 [Pipeline] cleanWs
00:20:40.455 [WS-CLEANUP] Deleting project workspace...
00:20:40.455 [WS-CLEANUP] Deferred wipeout is used...
00:20:40.461 [WS-CLEANUP] done
00:20:40.463 [Pipeline] }
00:20:40.479 [Pipeline] // stage
00:20:40.484 [Pipeline] }
00:20:40.500 [Pipeline] // node
00:20:40.506 [Pipeline] End of Pipeline
00:20:40.558 Finished: SUCCESS